00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2466 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3731 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.156 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.157 The recommended git tool is: git 00:00:00.157 using credential 00000000-0000-0000-0000-000000000002 00:00:00.158 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.198 Fetching changes from the remote Git repository 00:00:00.200 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.239 Using shallow fetch with depth 1 00:00:00.239 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.239 > git --version # timeout=10 00:00:00.272 > git --version # 'git version 2.39.2' 00:00:00.272 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.291 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.291 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.133 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.153 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.179 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.180 > git config core.sparsecheckout # timeout=10 00:00:08.214 > git read-tree -mu HEAD # timeout=10 00:00:08.236 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.253 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.254 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.341 [Pipeline] Start of Pipeline 00:00:08.353 [Pipeline] library 00:00:08.354 Loading library shm_lib@master 00:00:08.354 Library shm_lib@master is cached. Copying from home. 00:00:08.366 [Pipeline] node 00:00:08.377 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:08.378 [Pipeline] { 00:00:08.387 [Pipeline] catchError 00:00:08.388 [Pipeline] { 00:00:08.397 [Pipeline] wrap 00:00:08.405 [Pipeline] { 00:00:08.412 [Pipeline] stage 00:00:08.413 [Pipeline] { (Prologue) 00:00:08.427 [Pipeline] echo 00:00:08.428 Node: VM-host-SM9 00:00:08.432 [Pipeline] cleanWs 00:00:08.441 [WS-CLEANUP] Deleting project workspace... 00:00:08.441 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.447 [WS-CLEANUP] done 00:00:08.647 [Pipeline] setCustomBuildProperty 00:00:08.762 [Pipeline] httpRequest 00:00:09.221 [Pipeline] echo 00:00:09.223 Sorcerer 10.211.164.20 is alive 00:00:09.232 [Pipeline] retry 00:00:09.234 [Pipeline] { 00:00:09.247 [Pipeline] httpRequest 00:00:09.252 HttpMethod: GET 00:00:09.252 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.253 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.268 Response Code: HTTP/1.1 200 OK 00:00:09.269 Success: Status code 200 is in the accepted range: 200,404 00:00:09.269 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:33.656 [Pipeline] } 00:00:33.674 [Pipeline] // retry 00:00:33.682 [Pipeline] sh 00:00:33.964 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:33.980 [Pipeline] httpRequest 00:00:34.366 [Pipeline] echo 00:00:34.368 Sorcerer 10.211.164.20 is alive 00:00:34.378 [Pipeline] retry 00:00:34.380 [Pipeline] { 00:00:34.395 [Pipeline] httpRequest 00:00:34.400 HttpMethod: GET 00:00:34.400 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:34.401 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:34.421 Response Code: HTTP/1.1 200 OK 00:00:34.422 Success: Status code 200 is in the accepted range: 200,404 00:00:34.422 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:24.503 [Pipeline] } 00:01:24.521 [Pipeline] // retry 00:01:24.529 [Pipeline] sh 00:01:24.811 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:28.111 [Pipeline] sh 00:01:28.391 + git -C spdk log --oneline -n5 00:01:28.391 e01cb43b8 mk/spdk.common.mk sed the minor version 00:01:28.391 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:01:28.391 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:28.391 66289a6db build: use VERSION file for storing version 00:01:28.391 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:28.410 [Pipeline] withCredentials 00:01:28.421 > git --version # timeout=10 00:01:28.433 > git --version # 'git version 2.39.2' 00:01:28.449 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:28.452 [Pipeline] { 00:01:28.461 [Pipeline] retry 00:01:28.464 [Pipeline] { 00:01:28.479 [Pipeline] sh 00:01:28.935 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:29.206 [Pipeline] } 00:01:29.225 [Pipeline] // retry 00:01:29.230 [Pipeline] } 00:01:29.246 [Pipeline] // withCredentials 00:01:29.256 [Pipeline] httpRequest 00:01:29.637 [Pipeline] echo 00:01:29.639 Sorcerer 10.211.164.20 is alive 00:01:29.649 [Pipeline] retry 00:01:29.651 [Pipeline] { 00:01:29.666 [Pipeline] httpRequest 00:01:29.670 HttpMethod: GET 00:01:29.671 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:29.671 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:29.677 Response Code: HTTP/1.1 200 OK 00:01:29.677 Success: Status code 200 is in the accepted range: 200,404 00:01:29.678 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:56.091 [Pipeline] } 
00:01:56.108 [Pipeline] // retry 00:01:56.115 [Pipeline] sh 00:01:56.393 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:58.306 [Pipeline] sh 00:01:58.584 + git -C dpdk log --oneline -n5 00:01:58.584 caf0f5d395 version: 22.11.4 00:01:58.584 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:58.584 dc9c799c7d vhost: fix missing spinlock unlock 00:01:58.584 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:58.584 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:58.600 [Pipeline] writeFile 00:01:58.615 [Pipeline] sh 00:01:58.893 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:58.903 [Pipeline] sh 00:01:59.181 + cat autorun-spdk.conf 00:01:59.181 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.181 SPDK_TEST_NVMF=1 00:01:59.181 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:59.181 SPDK_TEST_URING=1 00:01:59.181 SPDK_TEST_USDT=1 00:01:59.181 SPDK_RUN_UBSAN=1 00:01:59.181 NET_TYPE=virt 00:01:59.181 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:59.181 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:59.181 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:59.188 RUN_NIGHTLY=1 00:01:59.190 [Pipeline] } 00:01:59.204 [Pipeline] // stage 00:01:59.218 [Pipeline] stage 00:01:59.220 [Pipeline] { (Run VM) 00:01:59.234 [Pipeline] sh 00:01:59.513 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:59.513 + echo 'Start stage prepare_nvme.sh' 00:01:59.513 Start stage prepare_nvme.sh 00:01:59.513 + [[ -n 4 ]] 00:01:59.513 + disk_prefix=ex4 00:01:59.513 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:59.513 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:59.513 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:59.513 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.513 ++ SPDK_TEST_NVMF=1 00:01:59.513 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:59.513 ++ SPDK_TEST_URING=1 00:01:59.513 ++ SPDK_TEST_USDT=1 00:01:59.513 ++ SPDK_RUN_UBSAN=1 00:01:59.513 ++ NET_TYPE=virt 00:01:59.513 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:59.513 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:59.513 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:59.513 ++ RUN_NIGHTLY=1 00:01:59.513 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:59.513 + nvme_files=() 00:01:59.513 + declare -A nvme_files 00:01:59.513 + backend_dir=/var/lib/libvirt/images/backends 00:01:59.513 + nvme_files['nvme.img']=5G 00:01:59.513 + nvme_files['nvme-cmb.img']=5G 00:01:59.513 + nvme_files['nvme-multi0.img']=4G 00:01:59.513 + nvme_files['nvme-multi1.img']=4G 00:01:59.513 + nvme_files['nvme-multi2.img']=4G 00:01:59.513 + nvme_files['nvme-openstack.img']=8G 00:01:59.513 + nvme_files['nvme-zns.img']=5G 00:01:59.513 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:59.513 + (( SPDK_TEST_FTL == 1 )) 00:01:59.513 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:59.513 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:59.513 + for nvme in "${!nvme_files[@]}" 00:01:59.513 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:59.513 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:59.513 + for nvme in "${!nvme_files[@]}" 00:01:59.513 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:59.513 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:59.513 + for nvme in "${!nvme_files[@]}" 00:01:59.513 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:59.513 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:59.513 + for nvme in "${!nvme_files[@]}" 00:01:59.513 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:59.771 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:59.771 + for nvme in "${!nvme_files[@]}" 00:01:59.771 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:59.771 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:59.771 + for nvme in "${!nvme_files[@]}" 00:01:59.771 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:59.771 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:59.771 + for nvme in "${!nvme_files[@]}" 00:01:59.771 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:59.771 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:00.029 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:02:00.029 + echo 'End stage prepare_nvme.sh' 00:02:00.029 End stage prepare_nvme.sh 00:02:00.040 [Pipeline] sh 00:02:00.318 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:00.318 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:02:00.318 00:02:00.318 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:02:00.318 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:02:00.318 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:00.318 HELP=0 00:02:00.318 DRY_RUN=0 00:02:00.318 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:02:00.318 NVME_DISKS_TYPE=nvme,nvme, 00:02:00.318 NVME_AUTO_CREATE=0 00:02:00.318 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:02:00.318 NVME_CMB=,, 00:02:00.318 NVME_PMR=,, 00:02:00.318 NVME_ZNS=,, 00:02:00.318 NVME_MS=,, 00:02:00.318 NVME_FDP=,, 
00:02:00.318 SPDK_VAGRANT_DISTRO=fedora39 00:02:00.318 SPDK_VAGRANT_VMCPU=10 00:02:00.318 SPDK_VAGRANT_VMRAM=12288 00:02:00.318 SPDK_VAGRANT_PROVIDER=libvirt 00:02:00.318 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:00.318 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:00.318 SPDK_OPENSTACK_NETWORK=0 00:02:00.318 VAGRANT_PACKAGE_BOX=0 00:02:00.318 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:00.318 FORCE_DISTRO=true 00:02:00.318 VAGRANT_BOX_VERSION= 00:02:00.318 EXTRA_VAGRANTFILES= 00:02:00.318 NIC_MODEL=e1000 00:02:00.318 00:02:00.318 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:02:00.318 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:03.599 Bringing machine 'default' up with 'libvirt' provider... 00:02:04.165 ==> default: Creating image (snapshot of base box volume). 00:02:04.165 ==> default: Creating domain with the following settings... 00:02:04.165 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734358676_ad36d3ba54b0eba1508a 00:02:04.165 ==> default: -- Domain type: kvm 00:02:04.165 ==> default: -- Cpus: 10 00:02:04.165 ==> default: -- Feature: acpi 00:02:04.165 ==> default: -- Feature: apic 00:02:04.165 ==> default: -- Feature: pae 00:02:04.165 ==> default: -- Memory: 12288M 00:02:04.165 ==> default: -- Memory Backing: hugepages: 00:02:04.165 ==> default: -- Management MAC: 00:02:04.165 ==> default: -- Loader: 00:02:04.165 ==> default: -- Nvram: 00:02:04.165 ==> default: -- Base box: spdk/fedora39 00:02:04.165 ==> default: -- Storage pool: default 00:02:04.165 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734358676_ad36d3ba54b0eba1508a.img (20G) 00:02:04.165 ==> default: -- Volume Cache: default 00:02:04.165 ==> default: -- Kernel: 00:02:04.165 ==> default: -- Initrd: 00:02:04.165 ==> default: -- Graphics Type: vnc 00:02:04.165 ==> default: -- Graphics Port: -1 00:02:04.165 ==> default: -- Graphics IP: 127.0.0.1 00:02:04.165 ==> default: -- Graphics Password: Not defined 00:02:04.165 ==> default: -- Video Type: cirrus 00:02:04.165 ==> default: -- Video VRAM: 9216 00:02:04.165 ==> default: -- Sound Type: 00:02:04.165 ==> default: -- Keymap: en-us 00:02:04.165 ==> default: -- TPM Path: 00:02:04.165 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:04.165 ==> default: -- Command line args: 00:02:04.165 ==> default: -> value=-device, 00:02:04.165 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:04.165 ==> default: -> value=-drive, 00:02:04.165 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:02:04.165 ==> default: -> value=-device, 00:02:04.165 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:04.165 ==> default: -> value=-device, 00:02:04.165 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:04.165 ==> default: -> value=-drive, 00:02:04.165 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:04.165 ==> default: -> value=-device, 00:02:04.165 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:04.165 ==> default: -> value=-drive, 00:02:04.165 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:04.165 ==> default: -> value=-device, 00:02:04.165 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:04.165 ==> default: -> value=-drive, 00:02:04.165 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:04.165 ==> default: -> value=-device, 00:02:04.165 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:04.165 ==> default: Creating shared folders metadata... 00:02:04.165 ==> default: Starting domain. 00:02:05.542 ==> default: Waiting for domain to get an IP address... 00:02:23.636 ==> default: Waiting for SSH to become available... 00:02:23.636 ==> default: Configuring and enabling network interfaces... 00:02:26.163 default: SSH address: 192.168.121.8:22 00:02:26.163 default: SSH username: vagrant 00:02:26.163 default: SSH auth method: private key 00:02:28.692 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:35.336 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:40.603 ==> default: Mounting SSHFS shared folder... 00:02:42.505 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:42.505 ==> default: Checking Mount.. 00:02:43.907 ==> default: Folder Successfully Mounted! 00:02:43.907 ==> default: Running provisioner: file... 00:02:44.473 default: ~/.gitconfig => .gitconfig 00:02:45.040 00:02:45.040 SUCCESS! 00:02:45.040 00:02:45.040 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:45.040 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:45.040 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:45.040 00:02:45.048 [Pipeline] } 00:02:45.062 [Pipeline] // stage 00:02:45.071 [Pipeline] dir 00:02:45.071 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:45.073 [Pipeline] { 00:02:45.086 [Pipeline] catchError 00:02:45.088 [Pipeline] { 00:02:45.099 [Pipeline] sh 00:02:45.377 + vagrant ssh-config --host vagrant 00:02:45.377 + sed -ne /^Host/,$p 00:02:45.377 + tee ssh_conf 00:02:48.660 Host vagrant 00:02:48.660 HostName 192.168.121.8 00:02:48.660 User vagrant 00:02:48.660 Port 22 00:02:48.660 UserKnownHostsFile /dev/null 00:02:48.660 StrictHostKeyChecking no 00:02:48.660 PasswordAuthentication no 00:02:48.660 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:48.660 IdentitiesOnly yes 00:02:48.660 LogLevel FATAL 00:02:48.660 ForwardAgent yes 00:02:48.660 ForwardX11 yes 00:02:48.660 00:02:48.671 [Pipeline] withEnv 00:02:48.673 [Pipeline] { 00:02:48.684 [Pipeline] sh 00:02:48.964 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:48.964 source /etc/os-release 00:02:48.964 [[ -e /image.version ]] && img=$(< /image.version) 00:02:48.964 # Minimal, systemd-like check. 
00:02:48.964 if [[ -e /.dockerenv ]]; then 00:02:48.964 # Clear garbage from the node's name: 00:02:48.964 # agt-er_autotest_547-896 -> autotest_547-896 00:02:48.964 # $HOSTNAME is the actual container id 00:02:48.964 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:48.964 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:48.964 # We can assume this is a mount from a host where container is running, 00:02:48.964 # so fetch its hostname to easily identify the target swarm worker. 00:02:48.964 container="$(< /etc/hostname) ($agent)" 00:02:48.964 else 00:02:48.964 # Fallback 00:02:48.964 container=$agent 00:02:48.964 fi 00:02:48.964 fi 00:02:48.964 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:48.964 00:02:49.233 [Pipeline] } 00:02:49.249 [Pipeline] // withEnv 00:02:49.257 [Pipeline] setCustomBuildProperty 00:02:49.272 [Pipeline] stage 00:02:49.274 [Pipeline] { (Tests) 00:02:49.288 [Pipeline] sh 00:02:49.568 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:49.840 [Pipeline] sh 00:02:50.121 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:50.393 [Pipeline] timeout 00:02:50.393 Timeout set to expire in 1 hr 0 min 00:02:50.395 [Pipeline] { 00:02:50.408 [Pipeline] sh 00:02:50.686 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:51.252 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version 00:02:51.264 [Pipeline] sh 00:02:51.543 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:51.815 [Pipeline] sh 00:02:52.094 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:52.368 [Pipeline] sh 00:02:52.646 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:52.905 ++ readlink -f spdk_repo 00:02:52.905 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:52.905 + [[ -n /home/vagrant/spdk_repo ]] 00:02:52.905 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:52.905 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:52.905 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:52.905 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:52.905 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:52.905 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:52.905 + cd /home/vagrant/spdk_repo 00:02:52.905 + source /etc/os-release 00:02:52.905 ++ NAME='Fedora Linux' 00:02:52.905 ++ VERSION='39 (Cloud Edition)' 00:02:52.905 ++ ID=fedora 00:02:52.905 ++ VERSION_ID=39 00:02:52.905 ++ VERSION_CODENAME= 00:02:52.905 ++ PLATFORM_ID=platform:f39 00:02:52.905 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:52.905 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:52.905 ++ LOGO=fedora-logo-icon 00:02:52.905 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:52.905 ++ HOME_URL=https://fedoraproject.org/ 00:02:52.905 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:52.905 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:52.905 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:52.905 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:52.905 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:52.905 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:52.905 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:52.905 ++ SUPPORT_END=2024-11-12 00:02:52.905 ++ VARIANT='Cloud Edition' 00:02:52.905 ++ VARIANT_ID=cloud 00:02:52.905 + uname -a 00:02:52.905 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:52.905 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:53.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:53.423 Hugepages 00:02:53.423 node hugesize free / total 00:02:53.423 node0 1048576kB 0 / 0 00:02:53.423 node0 2048kB 0 / 0 00:02:53.423 00:02:53.423 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:53.423 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:53.423 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:53.423 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:53.423 + rm -f /tmp/spdk-ld-path 00:02:53.423 + source autorun-spdk.conf 00:02:53.423 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:53.423 ++ SPDK_TEST_NVMF=1 00:02:53.423 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:53.423 ++ SPDK_TEST_URING=1 00:02:53.423 ++ SPDK_TEST_USDT=1 00:02:53.423 ++ SPDK_RUN_UBSAN=1 00:02:53.423 ++ NET_TYPE=virt 00:02:53.423 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:53.423 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:53.423 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:53.423 ++ RUN_NIGHTLY=1 00:02:53.423 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:53.423 + [[ -n '' ]] 00:02:53.423 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:53.423 + for M in /var/spdk/build-*-manifest.txt 00:02:53.423 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:53.423 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:53.423 + for M in /var/spdk/build-*-manifest.txt 00:02:53.423 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:53.423 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:53.423 + for M in /var/spdk/build-*-manifest.txt 00:02:53.423 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:53.423 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:53.423 ++ uname 00:02:53.423 + [[ Linux == \L\i\n\u\x ]] 00:02:53.423 + sudo dmesg -T 00:02:53.423 + sudo dmesg --clear 00:02:53.423 + dmesg_pid=5995 00:02:53.423 + [[ Fedora Linux == FreeBSD ]] 
00:02:53.423 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:53.423 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:53.423 + sudo dmesg -Tw 00:02:53.423 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:53.423 + [[ -x /usr/src/fio-static/fio ]] 00:02:53.423 + export FIO_BIN=/usr/src/fio-static/fio 00:02:53.423 + FIO_BIN=/usr/src/fio-static/fio 00:02:53.423 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:53.423 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:53.423 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:53.423 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:53.423 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:53.423 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:53.423 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:53.423 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:53.423 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:53.683 14:18:45 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:53.683 14:18:45 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:53.683 14:18:45 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:53.684 14:18:45 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:53.684 14:18:45 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:53.684 14:18:45 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:53.684 14:18:45 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:02:53.684 14:18:45 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:53.684 14:18:45 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:02:53.684 14:18:45 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:53.684 14:18:45 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:53.684 14:18:45 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:53.684 14:18:45 -- spdk_repo/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:53.684 14:18:45 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:53.684 14:18:45 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:53.684 14:18:45 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:53.684 14:18:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:53.684 14:18:45 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:53.684 14:18:45 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:53.684 14:18:45 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:53.684 14:18:45 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:53.684 14:18:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.684 14:18:45 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.684 14:18:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.684 14:18:45 -- paths/export.sh@5 -- $ export PATH 00:02:53.684 14:18:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.684 14:18:45 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:53.684 14:18:45 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:53.684 14:18:45 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734358725.XXXXXX 00:02:53.684 14:18:45 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734358725.ZfHfXu 00:02:53.684 14:18:45 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:53.684 14:18:45 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']' 00:02:53.684 14:18:45 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:53.684 14:18:45 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:53.684 14:18:45 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:53.684 14:18:45 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:53.684 14:18:45 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:53.684 14:18:45 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:53.684 14:18:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:53.684 14:18:45 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:53.684 14:18:45 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:53.684 14:18:45 -- pm/common@17 -- $ local monitor 00:02:53.684 14:18:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.684 14:18:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.684 14:18:45 -- pm/common@25 -- $ sleep 1 00:02:53.684 
14:18:45 -- pm/common@21 -- $ date +%s 00:02:53.684 14:18:45 -- pm/common@21 -- $ date +%s 00:02:53.684 14:18:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734358725 00:02:53.684 14:18:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734358725 00:02:53.684 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734358725_collect-vmstat.pm.log 00:02:53.684 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734358725_collect-cpu-load.pm.log 00:02:54.619 14:18:46 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:54.619 14:18:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:54.619 14:18:46 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:54.619 14:18:46 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:54.619 14:18:46 -- spdk/autobuild.sh@16 -- $ date -u 00:02:54.619 Mon Dec 16 02:18:46 PM UTC 2024 00:02:54.619 14:18:46 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:54.619 v25.01-rc1-2-ge01cb43b8 00:02:54.619 14:18:46 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:54.619 14:18:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:54.619 14:18:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:54.619 14:18:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:54.619 14:18:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:54.619 14:18:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:54.619 ************************************ 00:02:54.619 START TEST ubsan 00:02:54.619 ************************************ 00:02:54.619 using ubsan 00:02:54.619 14:18:46 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:54.619 00:02:54.619 real 0m0.000s 00:02:54.619 user 0m0.000s 00:02:54.619 sys 0m0.000s 00:02:54.619 14:18:46 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:54.619 ************************************ 00:02:54.619 END TEST ubsan 00:02:54.619 14:18:46 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:54.619 ************************************ 00:02:54.878 14:18:46 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:54.878 14:18:46 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:54.878 14:18:46 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:54.878 14:18:46 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:54.878 14:18:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:54.878 14:18:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:54.878 ************************************ 00:02:54.878 START TEST build_native_dpdk 00:02:54.878 ************************************ 00:02:54.878 14:18:46 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:54.878 14:18:46 build_native_dpdk -- 
common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:54.878 caf0f5d395 version: 22.11.4 00:02:54.878 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:54.878 dc9c799c7d vhost: fix missing spinlock unlock 00:02:54.878 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:54.878 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:54.878 14:18:46 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:54.879 14:18:46 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:54.879 14:18:46 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:54.879 14:18:46 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:54.879 14:18:46 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:54.879 14:18:46 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:54.879 14:18:46 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:54.879 14:18:46 build_native_dpdk 
-- common/autobuild_common.sh@175 -- $ uname -s 00:02:54.879 14:18:46 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:54.879 14:18:46 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:54.879 14:18:46 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:54.879 patching file config/rte_config.h 00:02:54.879 Hunk #1 succeeded at 60 (offset 1 line). 
00:02:54.879 14:18:46 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:54.879 14:18:46 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:54.879 patching file lib/pcapng/rte_pcapng.c 00:02:54.879 Hunk #1 succeeded at 110 (offset -18 lines). 
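The lt/ge calls traced above and below come from the version helper in scripts/common.sh: the dotted versions are split on "." and compared field by field to decide whether the rte_config.h and rte_pcapng.c patches apply to DPDK 22.11.4. The following is a minimal stand-alone sketch of that idea only; compare_versions is a hypothetical name for illustration, not the real cmp_versions implementation.

    #!/usr/bin/env bash
    # Sketch: compare two dotted versions field by field.
    # Prints "lt", "eq", or "gt"; missing fields are treated as 0.
    compare_versions() {
        local -a v1 v2
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            # Force base 10 so fields like "07" are not read as octal.
            local a=$(( 10#${v1[i]:-0} )) b=$(( 10#${v2[i]:-0} ))
            (( a > b )) && { echo gt; return; }
            (( a < b )) && { echo lt; return; }
        done
        echo eq
    }

    compare_versions 22.11.4 21.11.0   # gt -> the pre-21.11 branch is skipped, as in the trace
    compare_versions 22.11.4 24.07.0   # lt -> the pre-24.07 patches are applied, as in the trace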
00:02:54.879 14:18:46 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:54.879 14:18:46 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:54.879 14:18:46 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:54.879 14:18:46 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:54.879 14:18:46 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:54.879 14:18:46 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:54.879 14:18:46 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:03:00.243 The Meson build system 00:03:00.243 Version: 1.5.0 00:03:00.243 
Source dir: /home/vagrant/spdk_repo/dpdk 00:03:00.243 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:00.243 Build type: native build 00:03:00.243 Program cat found: YES (/usr/bin/cat) 00:03:00.243 Project name: DPDK 00:03:00.243 Project version: 22.11.4 00:03:00.243 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:00.243 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:00.243 Host machine cpu family: x86_64 00:03:00.243 Host machine cpu: x86_64 00:03:00.243 Message: ## Building in Developer Mode ## 00:03:00.243 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:00.243 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:00.243 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:00.243 Program objdump found: YES (/usr/bin/objdump) 00:03:00.244 Program python3 found: YES (/usr/bin/python3) 00:03:00.244 Program cat found: YES (/usr/bin/cat) 00:03:00.244 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:03:00.244 Checking for size of "void *" : 8 00:03:00.244 Checking for size of "void *" : 8 (cached) 00:03:00.244 Library m found: YES 00:03:00.244 Library numa found: YES 00:03:00.244 Has header "numaif.h" : YES 00:03:00.244 Library fdt found: NO 00:03:00.244 Library execinfo found: NO 00:03:00.244 Has header "execinfo.h" : YES 00:03:00.244 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:00.244 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:00.244 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:00.244 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:00.244 Run-time dependency openssl found: YES 3.1.1 00:03:00.244 Run-time dependency libpcap found: YES 1.10.4 00:03:00.244 Has header "pcap.h" with dependency libpcap: YES 00:03:00.244 Compiler for C supports arguments -Wcast-qual: YES 00:03:00.244 Compiler for C supports arguments -Wdeprecated: YES 00:03:00.244 Compiler for C supports arguments -Wformat: YES 00:03:00.244 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:00.244 Compiler for C supports arguments -Wformat-security: NO 00:03:00.244 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:00.244 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:00.244 Compiler for C supports arguments -Wnested-externs: YES 00:03:00.244 Compiler for C supports arguments -Wold-style-definition: YES 00:03:00.244 Compiler for C supports arguments -Wpointer-arith: YES 00:03:00.244 Compiler for C supports arguments -Wsign-compare: YES 00:03:00.244 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:00.244 Compiler for C supports arguments -Wundef: YES 00:03:00.244 Compiler for C supports arguments -Wwrite-strings: YES 00:03:00.244 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:00.244 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:00.244 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:00.244 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:00.244 Compiler for C supports arguments -mavx512f: YES 00:03:00.244 Checking if "AVX512 checking" compiles: YES 00:03:00.244 Fetching value of define "__SSE4_2__" : 1 00:03:00.244 Fetching value of define "__AES__" : 1 00:03:00.244 Fetching value of define "__AVX__" : 1 00:03:00.244 Fetching value of define "__AVX2__" : 1 
00:03:00.244 Fetching value of define "__AVX512BW__" : (undefined) 00:03:00.244 Fetching value of define "__AVX512CD__" : (undefined) 00:03:00.244 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:00.244 Fetching value of define "__AVX512F__" : (undefined) 00:03:00.244 Fetching value of define "__AVX512VL__" : (undefined) 00:03:00.244 Fetching value of define "__PCLMUL__" : 1 00:03:00.244 Fetching value of define "__RDRND__" : 1 00:03:00.244 Fetching value of define "__RDSEED__" : 1 00:03:00.244 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:00.244 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:00.244 Message: lib/kvargs: Defining dependency "kvargs" 00:03:00.244 Message: lib/telemetry: Defining dependency "telemetry" 00:03:00.244 Checking for function "getentropy" : YES 00:03:00.244 Message: lib/eal: Defining dependency "eal" 00:03:00.244 Message: lib/ring: Defining dependency "ring" 00:03:00.244 Message: lib/rcu: Defining dependency "rcu" 00:03:00.244 Message: lib/mempool: Defining dependency "mempool" 00:03:00.244 Message: lib/mbuf: Defining dependency "mbuf" 00:03:00.244 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:00.244 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:00.244 Compiler for C supports arguments -mpclmul: YES 00:03:00.244 Compiler for C supports arguments -maes: YES 00:03:00.244 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:00.244 Compiler for C supports arguments -mavx512bw: YES 00:03:00.244 Compiler for C supports arguments -mavx512dq: YES 00:03:00.244 Compiler for C supports arguments -mavx512vl: YES 00:03:00.244 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:00.244 Compiler for C supports arguments -mavx2: YES 00:03:00.244 Compiler for C supports arguments -mavx: YES 00:03:00.244 Message: lib/net: Defining dependency "net" 00:03:00.244 Message: lib/meter: Defining dependency "meter" 00:03:00.244 Message: lib/ethdev: Defining dependency "ethdev" 00:03:00.244 Message: lib/pci: Defining dependency "pci" 00:03:00.244 Message: lib/cmdline: Defining dependency "cmdline" 00:03:00.244 Message: lib/metrics: Defining dependency "metrics" 00:03:00.244 Message: lib/hash: Defining dependency "hash" 00:03:00.244 Message: lib/timer: Defining dependency "timer" 00:03:00.244 Fetching value of define "__AVX2__" : 1 (cached) 00:03:00.244 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:00.244 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:03:00.244 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:03:00.244 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:03:00.244 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:03:00.244 Message: lib/acl: Defining dependency "acl" 00:03:00.244 Message: lib/bbdev: Defining dependency "bbdev" 00:03:00.244 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:00.244 Run-time dependency libelf found: YES 0.191 00:03:00.244 Message: lib/bpf: Defining dependency "bpf" 00:03:00.244 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:00.244 Message: lib/compressdev: Defining dependency "compressdev" 00:03:00.244 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:00.244 Message: lib/distributor: Defining dependency "distributor" 00:03:00.244 Message: lib/efd: Defining dependency "efd" 00:03:00.244 Message: lib/eventdev: Defining dependency "eventdev" 00:03:00.244 Message: lib/gpudev: Defining dependency "gpudev" 
00:03:00.244 Message: lib/gro: Defining dependency "gro" 00:03:00.244 Message: lib/gso: Defining dependency "gso" 00:03:00.244 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:00.244 Message: lib/jobstats: Defining dependency "jobstats" 00:03:00.244 Message: lib/latencystats: Defining dependency "latencystats" 00:03:00.244 Message: lib/lpm: Defining dependency "lpm" 00:03:00.244 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:00.244 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:03:00.244 Fetching value of define "__AVX512IFMA__" : (undefined) 00:03:00.244 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:03:00.244 Message: lib/member: Defining dependency "member" 00:03:00.244 Message: lib/pcapng: Defining dependency "pcapng" 00:03:00.244 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:00.244 Message: lib/power: Defining dependency "power" 00:03:00.244 Message: lib/rawdev: Defining dependency "rawdev" 00:03:00.244 Message: lib/regexdev: Defining dependency "regexdev" 00:03:00.244 Message: lib/dmadev: Defining dependency "dmadev" 00:03:00.244 Message: lib/rib: Defining dependency "rib" 00:03:00.244 Message: lib/reorder: Defining dependency "reorder" 00:03:00.244 Message: lib/sched: Defining dependency "sched" 00:03:00.244 Message: lib/security: Defining dependency "security" 00:03:00.244 Message: lib/stack: Defining dependency "stack" 00:03:00.244 Has header "linux/userfaultfd.h" : YES 00:03:00.244 Message: lib/vhost: Defining dependency "vhost" 00:03:00.244 Message: lib/ipsec: Defining dependency "ipsec" 00:03:00.244 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:00.244 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:03:00.244 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:03:00.244 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:00.244 Message: lib/fib: Defining dependency "fib" 00:03:00.244 Message: lib/port: Defining dependency "port" 00:03:00.244 Message: lib/pdump: Defining dependency "pdump" 00:03:00.244 Message: lib/table: Defining dependency "table" 00:03:00.244 Message: lib/pipeline: Defining dependency "pipeline" 00:03:00.244 Message: lib/graph: Defining dependency "graph" 00:03:00.244 Message: lib/node: Defining dependency "node" 00:03:00.244 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:00.244 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:00.244 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:00.244 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:00.244 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:00.244 Compiler for C supports arguments -Wno-unused-value: YES 00:03:00.244 Compiler for C supports arguments -Wno-format: YES 00:03:00.244 Compiler for C supports arguments -Wno-format-security: YES 00:03:00.244 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:01.621 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:01.621 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:01.621 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:01.621 Fetching value of define "__AVX2__" : 1 (cached) 00:03:01.621 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:01.621 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:01.621 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:01.621 Compiler for C supports arguments 
-march=skylake-avx512: YES 00:03:01.621 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:01.621 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:01.621 Configuring doxy-api.conf using configuration 00:03:01.621 Program sphinx-build found: NO 00:03:01.621 Configuring rte_build_config.h using configuration 00:03:01.621 Message: 00:03:01.621 ================= 00:03:01.621 Applications Enabled 00:03:01.621 ================= 00:03:01.621 00:03:01.621 apps: 00:03:01.621 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:03:01.621 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:03:01.621 test-security-perf, 00:03:01.621 00:03:01.621 Message: 00:03:01.621 ================= 00:03:01.621 Libraries Enabled 00:03:01.621 ================= 00:03:01.621 00:03:01.621 libs: 00:03:01.621 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:03:01.621 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:03:01.621 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:03:01.621 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:03:01.621 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:03:01.621 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:03:01.621 table, pipeline, graph, node, 00:03:01.621 00:03:01.621 Message: 00:03:01.621 =============== 00:03:01.621 Drivers Enabled 00:03:01.621 =============== 00:03:01.621 00:03:01.621 common: 00:03:01.621 00:03:01.621 bus: 00:03:01.621 pci, vdev, 00:03:01.621 mempool: 00:03:01.621 ring, 00:03:01.621 dma: 00:03:01.621 00:03:01.621 net: 00:03:01.621 i40e, 00:03:01.621 raw: 00:03:01.621 00:03:01.621 crypto: 00:03:01.621 00:03:01.621 compress: 00:03:01.621 00:03:01.621 regex: 00:03:01.621 00:03:01.621 vdpa: 00:03:01.621 00:03:01.621 event: 00:03:01.621 00:03:01.621 baseband: 00:03:01.621 00:03:01.621 gpu: 00:03:01.621 00:03:01.621 00:03:01.621 Message: 00:03:01.621 ================= 00:03:01.621 Content Skipped 00:03:01.621 ================= 00:03:01.621 00:03:01.621 apps: 00:03:01.621 00:03:01.621 libs: 00:03:01.621 kni: explicitly disabled via build config (deprecated lib) 00:03:01.622 flow_classify: explicitly disabled via build config (deprecated lib) 00:03:01.622 00:03:01.622 drivers: 00:03:01.622 common/cpt: not in enabled drivers build config 00:03:01.622 common/dpaax: not in enabled drivers build config 00:03:01.622 common/iavf: not in enabled drivers build config 00:03:01.622 common/idpf: not in enabled drivers build config 00:03:01.622 common/mvep: not in enabled drivers build config 00:03:01.622 common/octeontx: not in enabled drivers build config 00:03:01.622 bus/auxiliary: not in enabled drivers build config 00:03:01.622 bus/dpaa: not in enabled drivers build config 00:03:01.622 bus/fslmc: not in enabled drivers build config 00:03:01.622 bus/ifpga: not in enabled drivers build config 00:03:01.622 bus/vmbus: not in enabled drivers build config 00:03:01.622 common/cnxk: not in enabled drivers build config 00:03:01.622 common/mlx5: not in enabled drivers build config 00:03:01.622 common/qat: not in enabled drivers build config 00:03:01.622 common/sfc_efx: not in enabled drivers build config 00:03:01.622 mempool/bucket: not in enabled drivers build config 00:03:01.622 mempool/cnxk: not in enabled drivers build config 00:03:01.622 mempool/dpaa: not in enabled drivers build config 00:03:01.622 mempool/dpaa2: not in enabled drivers build config 
00:03:01.622 mempool/octeontx: not in enabled drivers build config 00:03:01.622 mempool/stack: not in enabled drivers build config 00:03:01.622 dma/cnxk: not in enabled drivers build config 00:03:01.622 dma/dpaa: not in enabled drivers build config 00:03:01.622 dma/dpaa2: not in enabled drivers build config 00:03:01.622 dma/hisilicon: not in enabled drivers build config 00:03:01.622 dma/idxd: not in enabled drivers build config 00:03:01.622 dma/ioat: not in enabled drivers build config 00:03:01.622 dma/skeleton: not in enabled drivers build config 00:03:01.622 net/af_packet: not in enabled drivers build config 00:03:01.622 net/af_xdp: not in enabled drivers build config 00:03:01.622 net/ark: not in enabled drivers build config 00:03:01.622 net/atlantic: not in enabled drivers build config 00:03:01.622 net/avp: not in enabled drivers build config 00:03:01.622 net/axgbe: not in enabled drivers build config 00:03:01.622 net/bnx2x: not in enabled drivers build config 00:03:01.622 net/bnxt: not in enabled drivers build config 00:03:01.622 net/bonding: not in enabled drivers build config 00:03:01.622 net/cnxk: not in enabled drivers build config 00:03:01.622 net/cxgbe: not in enabled drivers build config 00:03:01.622 net/dpaa: not in enabled drivers build config 00:03:01.622 net/dpaa2: not in enabled drivers build config 00:03:01.622 net/e1000: not in enabled drivers build config 00:03:01.622 net/ena: not in enabled drivers build config 00:03:01.622 net/enetc: not in enabled drivers build config 00:03:01.622 net/enetfec: not in enabled drivers build config 00:03:01.622 net/enic: not in enabled drivers build config 00:03:01.622 net/failsafe: not in enabled drivers build config 00:03:01.622 net/fm10k: not in enabled drivers build config 00:03:01.622 net/gve: not in enabled drivers build config 00:03:01.622 net/hinic: not in enabled drivers build config 00:03:01.622 net/hns3: not in enabled drivers build config 00:03:01.622 net/iavf: not in enabled drivers build config 00:03:01.622 net/ice: not in enabled drivers build config 00:03:01.622 net/idpf: not in enabled drivers build config 00:03:01.622 net/igc: not in enabled drivers build config 00:03:01.622 net/ionic: not in enabled drivers build config 00:03:01.622 net/ipn3ke: not in enabled drivers build config 00:03:01.622 net/ixgbe: not in enabled drivers build config 00:03:01.622 net/kni: not in enabled drivers build config 00:03:01.622 net/liquidio: not in enabled drivers build config 00:03:01.622 net/mana: not in enabled drivers build config 00:03:01.622 net/memif: not in enabled drivers build config 00:03:01.622 net/mlx4: not in enabled drivers build config 00:03:01.622 net/mlx5: not in enabled drivers build config 00:03:01.622 net/mvneta: not in enabled drivers build config 00:03:01.622 net/mvpp2: not in enabled drivers build config 00:03:01.622 net/netvsc: not in enabled drivers build config 00:03:01.622 net/nfb: not in enabled drivers build config 00:03:01.622 net/nfp: not in enabled drivers build config 00:03:01.622 net/ngbe: not in enabled drivers build config 00:03:01.622 net/null: not in enabled drivers build config 00:03:01.622 net/octeontx: not in enabled drivers build config 00:03:01.622 net/octeon_ep: not in enabled drivers build config 00:03:01.622 net/pcap: not in enabled drivers build config 00:03:01.622 net/pfe: not in enabled drivers build config 00:03:01.622 net/qede: not in enabled drivers build config 00:03:01.622 net/ring: not in enabled drivers build config 00:03:01.622 net/sfc: not in enabled drivers build config 
00:03:01.622 net/softnic: not in enabled drivers build config 00:03:01.622 net/tap: not in enabled drivers build config 00:03:01.622 net/thunderx: not in enabled drivers build config 00:03:01.622 net/txgbe: not in enabled drivers build config 00:03:01.622 net/vdev_netvsc: not in enabled drivers build config 00:03:01.622 net/vhost: not in enabled drivers build config 00:03:01.622 net/virtio: not in enabled drivers build config 00:03:01.622 net/vmxnet3: not in enabled drivers build config 00:03:01.622 raw/cnxk_bphy: not in enabled drivers build config 00:03:01.622 raw/cnxk_gpio: not in enabled drivers build config 00:03:01.622 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:01.622 raw/ifpga: not in enabled drivers build config 00:03:01.622 raw/ntb: not in enabled drivers build config 00:03:01.622 raw/skeleton: not in enabled drivers build config 00:03:01.622 crypto/armv8: not in enabled drivers build config 00:03:01.622 crypto/bcmfs: not in enabled drivers build config 00:03:01.622 crypto/caam_jr: not in enabled drivers build config 00:03:01.622 crypto/ccp: not in enabled drivers build config 00:03:01.622 crypto/cnxk: not in enabled drivers build config 00:03:01.622 crypto/dpaa_sec: not in enabled drivers build config 00:03:01.622 crypto/dpaa2_sec: not in enabled drivers build config 00:03:01.622 crypto/ipsec_mb: not in enabled drivers build config 00:03:01.622 crypto/mlx5: not in enabled drivers build config 00:03:01.622 crypto/mvsam: not in enabled drivers build config 00:03:01.622 crypto/nitrox: not in enabled drivers build config 00:03:01.622 crypto/null: not in enabled drivers build config 00:03:01.622 crypto/octeontx: not in enabled drivers build config 00:03:01.622 crypto/openssl: not in enabled drivers build config 00:03:01.622 crypto/scheduler: not in enabled drivers build config 00:03:01.622 crypto/uadk: not in enabled drivers build config 00:03:01.622 crypto/virtio: not in enabled drivers build config 00:03:01.622 compress/isal: not in enabled drivers build config 00:03:01.622 compress/mlx5: not in enabled drivers build config 00:03:01.622 compress/octeontx: not in enabled drivers build config 00:03:01.622 compress/zlib: not in enabled drivers build config 00:03:01.622 regex/mlx5: not in enabled drivers build config 00:03:01.622 regex/cn9k: not in enabled drivers build config 00:03:01.622 vdpa/ifc: not in enabled drivers build config 00:03:01.622 vdpa/mlx5: not in enabled drivers build config 00:03:01.622 vdpa/sfc: not in enabled drivers build config 00:03:01.622 event/cnxk: not in enabled drivers build config 00:03:01.622 event/dlb2: not in enabled drivers build config 00:03:01.622 event/dpaa: not in enabled drivers build config 00:03:01.622 event/dpaa2: not in enabled drivers build config 00:03:01.622 event/dsw: not in enabled drivers build config 00:03:01.622 event/opdl: not in enabled drivers build config 00:03:01.622 event/skeleton: not in enabled drivers build config 00:03:01.622 event/sw: not in enabled drivers build config 00:03:01.622 event/octeontx: not in enabled drivers build config 00:03:01.622 baseband/acc: not in enabled drivers build config 00:03:01.622 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:01.622 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:01.622 baseband/la12xx: not in enabled drivers build config 00:03:01.622 baseband/null: not in enabled drivers build config 00:03:01.622 baseband/turbo_sw: not in enabled drivers build config 00:03:01.622 gpu/cuda: not in enabled drivers build config 00:03:01.622 
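Per the summaries above, only the pci and vdev buses, the ring mempool driver and the i40e net PMD are built into this DPDK; everything else is skipped. A minimal, hedged C sketch of what an application linked against this build could do to see the result at runtime (not part of this log; assumes standard DPDK 22.11 headers, and the port count may well be zero without bound i40e hardware):

#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
        /* Initialize the EAL; the enabled buses probe their devices here. */
        if (rte_eal_init(argc, argv) < 0) {
                fprintf(stderr, "rte_eal_init() failed\n");
                return 1;
        }
        /* Report how many ethdev ports the enabled PMDs picked up. */
        printf("ethdev ports detected: %u\n", rte_eth_dev_count_avail());
        rte_eal_cleanup();
        return 0;
}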
00:03:01.622 00:03:01.622 Build targets in project: 314 00:03:01.622 00:03:01.622 DPDK 22.11.4 00:03:01.622 00:03:01.622 User defined options 00:03:01.622 libdir : lib 00:03:01.622 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:01.622 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:01.622 c_link_args : 00:03:01.622 enable_docs : false 00:03:01.622 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:03:01.622 enable_kmods : false 00:03:01.622 machine : native 00:03:01.622 tests : false 00:03:01.622 00:03:01.622 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:01.622 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:01.881 14:18:53 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:01.881 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:01.881 [1/743] Generating lib/rte_kvargs_def with a custom command 00:03:01.881 [2/743] Generating lib/rte_kvargs_mingw with a custom command 00:03:01.881 [3/743] Generating lib/rte_telemetry_def with a custom command 00:03:01.881 [4/743] Generating lib/rte_telemetry_mingw with a custom command 00:03:01.881 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:01.881 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:01.881 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:01.881 [8/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:01.881 [9/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:01.881 [10/743] Linking static target lib/librte_kvargs.a 00:03:01.881 [11/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:02.139 [12/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:02.139 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:02.139 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:02.139 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:02.139 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:02.139 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:02.139 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:02.139 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:02.139 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.398 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:03:02.398 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:02.398 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:02.398 [24/743] Linking target lib/librte_kvargs.so.23.0 00:03:02.398 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:02.398 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:02.398 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:02.398 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 
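kvargs is the first library compiled above; it parses the comma-separated key=value device-argument strings used throughout DPDK. A hedged sketch of its public API (the argument string and the "iface"/"queues" keys are invented for illustration, not taken from this build):

#include <stdio.h>
#include <rte_kvargs.h>

/* Handler invoked for each matching key=value pair. */
static int handle_queues(const char *key, const char *value, void *opaque)
{
        (void)opaque;
        printf("%s -> %s\n", key, value);
        return 0;
}

int main(void)
{
        static const char * const valid_keys[] = { "iface", "queues", NULL };
        struct rte_kvargs *kv;

        kv = rte_kvargs_parse("iface=eth0,queues=4", valid_keys);
        if (kv == NULL)
                return 1;
        rte_kvargs_process(kv, "queues", handle_queues, NULL);
        rte_kvargs_free(kv);
        return 0;
}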
00:03:02.398 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:02.398 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:02.656 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:02.656 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:02.656 [33/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:02.656 [34/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:02.656 [35/743] Linking static target lib/librte_telemetry.a 00:03:02.656 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:02.656 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:02.656 [38/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:02.656 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:02.656 [40/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:03:02.656 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:02.915 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:02.915 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:02.915 [44/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.915 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:02.915 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:02.915 [47/743] Linking target lib/librte_telemetry.so.23.0 00:03:02.915 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:03.172 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:03.172 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:03.172 [51/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:03.172 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:03.172 [53/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:03:03.172 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:03.173 [55/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:03.173 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:03.173 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:03.173 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:03.173 [59/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:03.173 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:03.173 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:03.173 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:03.173 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:03.173 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:03.173 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:03:03.431 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:03.431 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:03.431 [68/743] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:03.431 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:03.431 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:03.431 [71/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:03.431 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:03.431 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:03.431 [74/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:03.431 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:03.431 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:03.431 [77/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:03.431 [78/743] Generating lib/rte_eal_def with a custom command 00:03:03.431 [79/743] Generating lib/rte_eal_mingw with a custom command 00:03:03.431 [80/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:03.431 [81/743] Generating lib/rte_ring_def with a custom command 00:03:03.431 [82/743] Generating lib/rte_ring_mingw with a custom command 00:03:03.689 [83/743] Generating lib/rte_rcu_def with a custom command 00:03:03.689 [84/743] Generating lib/rte_rcu_mingw with a custom command 00:03:03.689 [85/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:03.689 [86/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:03.689 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:03.689 [88/743] Linking static target lib/librte_ring.a 00:03:03.689 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:03.689 [90/743] Generating lib/rte_mempool_def with a custom command 00:03:03.689 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:03:03.947 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:03.947 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:03.947 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.205 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:04.205 [96/743] Linking static target lib/librte_eal.a 00:03:04.205 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:04.205 [98/743] Generating lib/rte_mbuf_def with a custom command 00:03:04.205 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:04.205 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:03:04.205 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:04.464 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:04.464 [103/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:04.464 [104/743] Linking static target lib/librte_rcu.a 00:03:04.464 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:04.722 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:04.722 [107/743] Linking static target lib/librte_mempool.a 00:03:04.722 [108/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:04.722 [109/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.722 [110/743] Generating lib/rte_net_def with a custom command 00:03:04.722 [111/743] Generating lib/rte_net_mingw 
with a custom command 00:03:04.722 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:04.980 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:04.980 [114/743] Generating lib/rte_meter_def with a custom command 00:03:04.981 [115/743] Generating lib/rte_meter_mingw with a custom command 00:03:04.981 [116/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:04.981 [117/743] Linking static target lib/librte_meter.a 00:03:04.981 [118/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:04.981 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:05.239 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:05.239 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:05.239 [122/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.239 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:05.239 [124/743] Linking static target lib/librte_mbuf.a 00:03:05.239 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:05.239 [126/743] Linking static target lib/librte_net.a 00:03:05.497 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.497 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.755 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:05.755 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:05.755 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:05.755 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:06.013 [133/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:06.013 [134/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.013 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:06.577 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:06.577 [137/743] Generating lib/rte_ethdev_def with a custom command 00:03:06.577 [138/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:06.577 [139/743] Generating lib/rte_ethdev_mingw with a custom command 00:03:06.577 [140/743] Generating lib/rte_pci_def with a custom command 00:03:06.577 [141/743] Generating lib/rte_pci_mingw with a custom command 00:03:06.577 [142/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:06.578 [143/743] Linking static target lib/librte_pci.a 00:03:06.578 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:06.835 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:06.835 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:06.835 [147/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:06.835 [148/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:06.835 [149/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.835 [150/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:06.835 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:07.093 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:07.093 [153/743] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:07.093 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:07.093 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:07.093 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:07.093 [157/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:07.093 [158/743] Generating lib/rte_cmdline_def with a custom command 00:03:07.093 [159/743] Generating lib/rte_cmdline_mingw with a custom command 00:03:07.093 [160/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:07.093 [161/743] Generating lib/rte_metrics_def with a custom command 00:03:07.093 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:03:07.351 [163/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:07.351 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:07.351 [165/743] Generating lib/rte_hash_def with a custom command 00:03:07.351 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:07.351 [167/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:07.351 [168/743] Generating lib/rte_hash_mingw with a custom command 00:03:07.351 [169/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:07.351 [170/743] Generating lib/rte_timer_def with a custom command 00:03:07.351 [171/743] Generating lib/rte_timer_mingw with a custom command 00:03:07.351 [172/743] Linking static target lib/librte_cmdline.a 00:03:07.351 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:07.918 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:07.918 [175/743] Linking static target lib/librte_metrics.a 00:03:07.918 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:07.918 [177/743] Linking static target lib/librte_timer.a 00:03:08.176 [178/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:08.176 [179/743] Linking static target lib/librte_ethdev.a 00:03:08.176 [180/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.434 [181/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.434 [182/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:08.434 [183/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.434 [184/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:08.999 [185/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:08.999 [186/743] Generating lib/rte_acl_def with a custom command 00:03:08.999 [187/743] Generating lib/rte_acl_mingw with a custom command 00:03:08.999 [188/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:08.999 [189/743] Generating lib/rte_bbdev_def with a custom command 00:03:08.999 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:03:09.257 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:09.257 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:03:09.257 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:03:09.515 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:09.772 [195/743] Compiling C object 
lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:09.772 [196/743] Linking static target lib/librte_bitratestats.a 00:03:09.772 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:10.030 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.030 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:10.030 [200/743] Linking static target lib/librte_bbdev.a 00:03:10.030 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:10.595 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:10.595 [203/743] Linking static target lib/librte_hash.a 00:03:10.595 [204/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:03:10.595 [205/743] Linking static target lib/acl/libavx512_tmp.a 00:03:10.595 [206/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:10.595 [207/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.595 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:10.853 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:11.140 [210/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:11.140 [211/743] Generating lib/rte_bpf_def with a custom command 00:03:11.140 [212/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:11.140 [213/743] Generating lib/rte_bpf_mingw with a custom command 00:03:11.140 [214/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.140 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:03:11.140 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:03:11.421 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:03:11.421 [218/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:11.421 [219/743] Linking static target lib/librte_acl.a 00:03:11.421 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:11.421 [221/743] Linking static target lib/librte_cfgfile.a 00:03:11.421 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:11.678 [223/743] Generating lib/rte_compressdev_def with a custom command 00:03:11.678 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:03:11.678 [225/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.678 [226/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.678 [227/743] Linking target lib/librte_eal.so.23.0 00:03:11.678 [228/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.678 [229/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:11.937 [230/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:11.937 [231/743] Generating lib/rte_cryptodev_def with a custom command 00:03:11.937 [232/743] Generating lib/rte_cryptodev_mingw with a custom command 00:03:11.937 [233/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:03:11.937 [234/743] Linking target lib/librte_ring.so.23.0 00:03:11.937 [235/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:11.937 [236/743] Linking target lib/librte_meter.so.23.0 00:03:11.937 [237/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:11.937 [238/743] 
Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:03:11.937 [239/743] Linking target lib/librte_pci.so.23.0 00:03:11.937 [240/743] Linking target lib/librte_rcu.so.23.0 00:03:12.194 [241/743] Linking target lib/librte_mempool.so.23.0 00:03:12.194 [242/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:03:12.194 [243/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:12.194 [244/743] Linking target lib/librte_timer.so.23.0 00:03:12.194 [245/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:03:12.194 [246/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:03:12.195 [247/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:12.195 [248/743] Linking target lib/librte_acl.so.23.0 00:03:12.195 [249/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:03:12.195 [250/743] Linking static target lib/librte_bpf.a 00:03:12.195 [251/743] Linking target lib/librte_mbuf.so.23.0 00:03:12.195 [252/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:03:12.195 [253/743] Linking target lib/librte_cfgfile.so.23.0 00:03:12.195 [254/743] Linking static target lib/librte_compressdev.a 00:03:12.452 [255/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:03:12.452 [256/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:03:12.452 [257/743] Generating lib/rte_distributor_def with a custom command 00:03:12.452 [258/743] Linking target lib/librte_bbdev.so.23.0 00:03:12.452 [259/743] Linking target lib/librte_net.so.23.0 00:03:12.452 [260/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:12.452 [261/743] Generating lib/rte_distributor_mingw with a custom command 00:03:12.452 [262/743] Generating lib/rte_efd_def with a custom command 00:03:12.452 [263/743] Generating lib/rte_efd_mingw with a custom command 00:03:12.452 [264/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.710 [265/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:03:12.710 [266/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:12.710 [267/743] Linking target lib/librte_cmdline.so.23.0 00:03:12.710 [268/743] Linking target lib/librte_hash.so.23.0 00:03:12.710 [269/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:03:12.968 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:12.968 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:12.968 [272/743] Linking static target lib/librte_distributor.a 00:03:12.968 [273/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.968 [274/743] Linking target lib/librte_ethdev.so.23.0 00:03:13.225 [275/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.225 [276/743] Linking target lib/librte_distributor.so.23.0 00:03:13.225 [277/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:13.225 [278/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:03:13.225 [279/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by 
meson to capture output) 00:03:13.225 [280/743] Linking target lib/librte_metrics.so.23.0 00:03:13.225 [281/743] Linking target lib/librte_bpf.so.23.0 00:03:13.225 [282/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:13.483 [283/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:13.483 [284/743] Linking target lib/librte_compressdev.so.23.0 00:03:13.483 [285/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:13.483 [286/743] Linking target lib/librte_bitratestats.so.23.0 00:03:13.483 [287/743] Generating lib/rte_eventdev_def with a custom command 00:03:13.483 [288/743] Generating lib/rte_eventdev_mingw with a custom command 00:03:13.483 [289/743] Generating lib/rte_gpudev_def with a custom command 00:03:13.483 [290/743] Generating lib/rte_gpudev_mingw with a custom command 00:03:13.741 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:14.000 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:14.000 [293/743] Linking static target lib/librte_efd.a 00:03:14.258 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:14.258 [295/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:14.258 [296/743] Linking static target lib/librte_cryptodev.a 00:03:14.258 [297/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:14.258 [298/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:14.258 [299/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:14.258 [300/743] Linking static target lib/librte_gpudev.a 00:03:14.258 [301/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.516 [302/743] Linking target lib/librte_efd.so.23.0 00:03:14.516 [303/743] Generating lib/rte_gro_def with a custom command 00:03:14.516 [304/743] Generating lib/rte_gro_mingw with a custom command 00:03:14.516 [305/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:14.516 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:14.878 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:14.878 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:15.137 [309/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.137 [310/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:15.137 [311/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:15.137 [312/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:15.137 [313/743] Linking target lib/librte_gpudev.so.23.0 00:03:15.137 [314/743] Linking static target lib/librte_gro.a 00:03:15.137 [315/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:15.137 [316/743] Generating lib/rte_gso_def with a custom command 00:03:15.137 [317/743] Generating lib/rte_gso_mingw with a custom command 00:03:15.395 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:15.395 [319/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.395 [320/743] Linking target lib/librte_gro.so.23.0 00:03:15.653 [321/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:15.653 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:15.653 [323/743] 
Generating lib/rte_ip_frag_def with a custom command 00:03:15.653 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:03:15.653 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:15.653 [326/743] Linking static target lib/librte_eventdev.a 00:03:15.911 [327/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:15.911 [328/743] Linking static target lib/librte_jobstats.a 00:03:15.911 [329/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:15.911 [330/743] Linking static target lib/librte_gso.a 00:03:15.911 [331/743] Generating lib/rte_jobstats_def with a custom command 00:03:15.911 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:03:15.911 [333/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:15.911 [334/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.168 [335/743] Linking target lib/librte_gso.so.23.0 00:03:16.168 [336/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:16.168 [337/743] Generating lib/rte_latencystats_def with a custom command 00:03:16.168 [338/743] Generating lib/rte_latencystats_mingw with a custom command 00:03:16.168 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:16.168 [340/743] Generating lib/rte_lpm_def with a custom command 00:03:16.168 [341/743] Generating lib/rte_lpm_mingw with a custom command 00:03:16.168 [342/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.168 [343/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:16.168 [344/743] Linking target lib/librte_jobstats.so.23.0 00:03:16.425 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:16.425 [346/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.425 [347/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:16.425 [348/743] Linking static target lib/librte_ip_frag.a 00:03:16.425 [349/743] Linking target lib/librte_cryptodev.so.23.0 00:03:16.683 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:16.683 [351/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:16.683 [352/743] Linking static target lib/librte_latencystats.a 00:03:16.683 [353/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.940 [354/743] Linking target lib/librte_ip_frag.so.23.0 00:03:16.940 [355/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:16.940 [356/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:16.940 [357/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:16.940 [358/743] Generating lib/rte_member_def with a custom command 00:03:16.940 [359/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.940 [360/743] Generating lib/rte_member_mingw with a custom command 00:03:16.940 [361/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:16.940 [362/743] Generating lib/rte_pcapng_def with a custom command 00:03:16.940 [363/743] Linking target lib/librte_latencystats.so.23.0 00:03:16.940 [364/743] Compiling C object 
lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:16.940 [365/743] Generating lib/rte_pcapng_mingw with a custom command 00:03:17.198 [366/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:17.198 [367/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:17.198 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:17.198 [369/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:17.455 [370/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:17.455 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:03:17.455 [372/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:17.455 [373/743] Linking static target lib/librte_lpm.a 00:03:17.711 [374/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:17.712 [375/743] Generating lib/rte_power_def with a custom command 00:03:17.712 [376/743] Generating lib/rte_power_mingw with a custom command 00:03:17.712 [377/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.712 [378/743] Linking target lib/librte_eventdev.so.23.0 00:03:17.712 [379/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:17.712 [380/743] Generating lib/rte_rawdev_def with a custom command 00:03:17.712 [381/743] Generating lib/rte_rawdev_mingw with a custom command 00:03:17.969 [382/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:17.969 [383/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:17.969 [384/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.969 [385/743] Generating lib/rte_regexdev_def with a custom command 00:03:17.969 [386/743] Generating lib/rte_regexdev_mingw with a custom command 00:03:17.969 [387/743] Linking target lib/librte_lpm.so.23.0 00:03:17.969 [388/743] Generating lib/rte_dmadev_def with a custom command 00:03:17.969 [389/743] Generating lib/rte_dmadev_mingw with a custom command 00:03:17.969 [390/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:17.969 [391/743] Linking static target lib/librte_pcapng.a 00:03:17.969 [392/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:17.969 [393/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:17.969 [394/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:17.969 [395/743] Linking static target lib/librte_rawdev.a 00:03:17.969 [396/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:17.969 [397/743] Generating lib/rte_rib_def with a custom command 00:03:18.226 [398/743] Generating lib/rte_rib_mingw with a custom command 00:03:18.226 [399/743] Generating lib/rte_reorder_def with a custom command 00:03:18.226 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:03:18.226 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.226 [402/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:18.484 [403/743] Linking static target lib/librte_dmadev.a 00:03:18.484 [404/743] Linking target lib/librte_pcapng.so.23.0 00:03:18.484 [405/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:18.484 [406/743] Linking static target lib/librte_power.a 00:03:18.484 [407/743] Generating symbol file 
lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:18.484 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.484 [409/743] Linking target lib/librte_rawdev.so.23.0 00:03:18.741 [410/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:18.741 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:18.741 [412/743] Linking static target lib/librte_regexdev.a 00:03:18.741 [413/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:18.741 [414/743] Linking static target lib/librte_member.a 00:03:18.741 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:18.741 [416/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:18.741 [417/743] Generating lib/rte_sched_mingw with a custom command 00:03:18.741 [418/743] Generating lib/rte_sched_def with a custom command 00:03:18.741 [419/743] Generating lib/rte_security_def with a custom command 00:03:18.741 [420/743] Generating lib/rte_security_mingw with a custom command 00:03:18.741 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:18.998 [422/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.998 [423/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:18.998 [424/743] Linking target lib/librte_dmadev.so.23.0 00:03:18.998 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:18.998 [426/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:18.998 [427/743] Linking static target lib/librte_reorder.a 00:03:18.998 [428/743] Generating lib/rte_stack_def with a custom command 00:03:18.998 [429/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.998 [430/743] Generating lib/rte_stack_mingw with a custom command 00:03:18.998 [431/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:18.998 [432/743] Linking static target lib/librte_stack.a 00:03:18.998 [433/743] Linking target lib/librte_member.so.23.0 00:03:18.998 [434/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:19.256 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:19.256 [436/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.256 [437/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.256 [438/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:19.256 [439/743] Linking target lib/librte_reorder.so.23.0 00:03:19.256 [440/743] Linking static target lib/librte_rib.a 00:03:19.256 [441/743] Linking target lib/librte_stack.so.23.0 00:03:19.256 [442/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.513 [443/743] Linking target lib/librte_regexdev.so.23.0 00:03:19.513 [444/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.513 [445/743] Linking target lib/librte_power.so.23.0 00:03:19.513 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:19.513 [447/743] Linking static target lib/librte_security.a 00:03:19.770 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.770 [449/743] Linking target lib/librte_rib.so.23.0 00:03:19.770 [450/743] Compiling C 
object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:19.770 [451/743] Generating lib/rte_vhost_def with a custom command 00:03:19.771 [452/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:19.771 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:03:20.027 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:20.028 [455/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:20.028 [456/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.028 [457/743] Linking target lib/librte_security.so.23.0 00:03:20.285 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:20.285 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:20.285 [460/743] Linking static target lib/librte_sched.a 00:03:20.542 [461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.542 [462/743] Linking target lib/librte_sched.so.23.0 00:03:20.542 [463/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:20.799 [464/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:20.799 [465/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:20.799 [466/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:20.799 [467/743] Generating lib/rte_ipsec_mingw with a custom command 00:03:20.799 [468/743] Generating lib/rte_ipsec_def with a custom command 00:03:21.057 [469/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:21.057 [470/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:21.057 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:21.315 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:21.315 [473/743] Generating lib/rte_fib_def with a custom command 00:03:21.315 [474/743] Generating lib/rte_fib_mingw with a custom command 00:03:21.315 [475/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:21.572 [476/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:21.572 [477/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:21.572 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:21.572 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:21.829 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:21.829 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:21.829 [482/743] Linking static target lib/librte_ipsec.a 00:03:22.086 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.086 [484/743] Linking target lib/librte_ipsec.so.23.0 00:03:22.086 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:22.344 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:22.344 [487/743] Linking static target lib/librte_fib.a 00:03:22.344 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:22.344 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:22.601 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:22.601 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:22.601 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.601 [493/743] Linking target 
lib/librte_fib.so.23.0 00:03:22.859 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:23.426 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:23.426 [496/743] Generating lib/rte_port_def with a custom command 00:03:23.426 [497/743] Generating lib/rte_port_mingw with a custom command 00:03:23.426 [498/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:23.427 [499/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:23.427 [500/743] Generating lib/rte_pdump_def with a custom command 00:03:23.427 [501/743] Generating lib/rte_pdump_mingw with a custom command 00:03:23.427 [502/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:23.427 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:23.684 [504/743] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:23.684 [505/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:23.684 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:23.684 [507/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:23.941 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:23.941 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:23.941 [510/743] Linking static target lib/librte_port.a 00:03:24.199 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:24.456 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:24.456 [513/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:24.456 [514/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.456 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:24.456 [516/743] Linking target lib/librte_port.so.23.0 00:03:24.713 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:24.713 [518/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:24.713 [519/743] Linking static target lib/librte_pdump.a 00:03:24.713 [520/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:24.971 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.971 [522/743] Linking target lib/librte_pdump.so.23.0 00:03:24.971 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:25.228 [524/743] Generating lib/rte_table_def with a custom command 00:03:25.228 [525/743] Generating lib/rte_table_mingw with a custom command 00:03:25.228 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:25.228 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:25.486 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:25.486 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:25.486 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:25.743 [531/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:25.743 [532/743] Generating lib/rte_pipeline_def with a custom command 00:03:25.743 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:03:25.743 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 
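The rte_table_lpm objects compiled in this stretch wrap the librte_lpm longest-prefix-match library built earlier. A minimal, hedged sketch of the underlying rte_lpm API (the route and table sizing are made up; assumes DPDK 22.11 headers and a working EAL environment, not anything shown in this log):

#include <stdio.h>
#include <rte_eal.h>
#include <rte_ip.h>
#include <rte_lpm.h>

int main(int argc, char **argv)
{
        struct rte_lpm_config cfg = { .max_rules = 1024, .number_tbl8s = 256 };
        struct rte_lpm *lpm;
        uint32_t next_hop = 0;

        if (rte_eal_init(argc, argv) < 0)
                return 1;

        lpm = rte_lpm_create("demo_lpm", 0, &cfg);      /* socket 0 */
        if (lpm == NULL)
                return 1;

        /* Illustrative route: 10.0.0.0/8 -> next hop 7. */
        rte_lpm_add(lpm, RTE_IPV4(10, 0, 0, 0), 8, 7);

        if (rte_lpm_lookup(lpm, RTE_IPV4(10, 1, 2, 3), &next_hop) == 0)
                printf("next hop: %u\n", next_hop);

        rte_lpm_free(lpm);
        rte_eal_cleanup();
        return 0;
}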
00:03:25.743 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:26.000 [536/743] Linking static target lib/librte_table.a 00:03:26.000 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:26.258 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:26.515 [539/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:26.515 [540/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.515 [541/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:26.515 [542/743] Linking target lib/librte_table.so.23.0 00:03:26.515 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:26.773 [544/743] Generating lib/rte_graph_def with a custom command 00:03:26.773 [545/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:26.773 [546/743] Generating lib/rte_graph_mingw with a custom command 00:03:26.773 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:26.773 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:27.365 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:27.365 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:27.365 [551/743] Linking static target lib/librte_graph.a 00:03:27.365 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:27.365 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:27.623 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:27.623 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:27.881 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:27.881 [557/743] Generating lib/rte_node_def with a custom command 00:03:28.139 [558/743] Generating lib/rte_node_mingw with a custom command 00:03:28.139 [559/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.139 [560/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:28.139 [561/743] Linking target lib/librte_graph.so.23.0 00:03:28.139 [562/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:28.139 [563/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:28.139 [564/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:28.139 [565/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:28.398 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:28.398 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:03:28.398 [568/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:28.398 [569/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:28.398 [570/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:28.398 [571/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:28.398 [572/743] Generating drivers/rte_bus_vdev_def with a custom command 00:03:28.398 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:28.398 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:03:28.398 [575/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:28.657 [576/743] Generating drivers/rte_mempool_ring_mingw 
with a custom command 00:03:28.657 [577/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:28.657 [578/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:28.657 [579/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:28.657 [580/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:28.657 [581/743] Linking static target lib/librte_node.a 00:03:28.915 [582/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:28.915 [583/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:28.915 [584/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:28.915 [585/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.915 [586/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:28.915 [587/743] Linking static target drivers/librte_bus_vdev.a 00:03:28.915 [588/743] Linking target lib/librte_node.so.23.0 00:03:28.915 [589/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:29.174 [590/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:29.174 [591/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:29.174 [592/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.174 [593/743] Linking static target drivers/librte_bus_pci.a 00:03:29.174 [594/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:29.174 [595/743] Linking target drivers/librte_bus_vdev.so.23.0 00:03:29.432 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:29.432 [597/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:29.432 [598/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.432 [599/743] Linking target drivers/librte_bus_pci.so.23.0 00:03:29.432 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:29.690 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:29.690 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:29.690 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:29.690 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:29.948 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:29.948 [606/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:29.948 [607/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:29.948 [608/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:29.948 [609/743] Linking static target drivers/librte_mempool_ring.a 00:03:29.948 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:03:30.515 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:30.773 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:30.773 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:31.032 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 
00:03:31.291 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:31.549 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:31.549 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:31.807 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:32.066 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:32.066 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:32.324 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:32.324 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:03:32.324 [623/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:32.324 [624/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:32.324 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:33.258 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:33.516 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:33.774 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:33.774 [629/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:33.774 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:33.774 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:33.774 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:33.774 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:34.032 [634/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:34.033 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:34.291 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:34.550 [637/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:34.550 [638/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:34.550 [639/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:35.116 [640/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:35.116 [641/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:35.116 [642/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:35.116 [643/743] Linking static target drivers/librte_net_i40e.a 00:03:35.116 [644/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:35.116 [645/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:35.116 [646/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:35.375 [647/743] Linking static target lib/librte_vhost.a 00:03:35.375 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:35.375 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:35.375 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:35.633 [651/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.633 [652/743] Compiling 
C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:35.633 [653/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:35.892 [654/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:35.892 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:36.150 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:36.409 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:36.409 [658/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.409 [659/743] Linking target lib/librte_vhost.so.23.0 00:03:36.667 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:36.667 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:36.667 [662/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:36.667 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:36.667 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:36.925 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:36.925 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:37.184 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:37.184 [668/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:37.184 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:37.442 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:37.442 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:37.701 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:37.701 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:38.267 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:38.525 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:38.525 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:38.525 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:38.784 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:38.784 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:38.784 [680/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:39.042 [681/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:39.042 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:39.302 [683/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:39.302 [684/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:39.560 [685/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:39.560 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:39.560 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 
00:03:39.560 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:40.126 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:40.126 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:40.126 [691/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:40.126 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:40.126 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:40.126 [694/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:40.693 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:40.693 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:40.693 [697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:40.951 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:40.951 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:41.209 [700/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:41.209 [701/743] Linking static target lib/librte_pipeline.a 00:03:41.467 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:41.467 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:41.725 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:41.725 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:41.984 [706/743] Linking target app/dpdk-pdump 00:03:41.984 [707/743] Linking target app/dpdk-dumpcap 00:03:41.984 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:41.984 [709/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:41.984 [710/743] Linking target app/dpdk-proc-info 00:03:42.242 [711/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:42.242 [712/743] Linking target app/dpdk-test-acl 00:03:42.242 [713/743] Linking target app/dpdk-test-bbdev 00:03:42.242 [714/743] Linking target app/dpdk-test-cmdline 00:03:42.500 [715/743] Linking target app/dpdk-test-compress-perf 00:03:42.500 [716/743] Linking target app/dpdk-test-crypto-perf 00:03:42.500 [717/743] Linking target app/dpdk-test-eventdev 00:03:42.500 [718/743] Linking target app/dpdk-test-fib 00:03:42.758 [719/743] Linking target app/dpdk-test-flow-perf 00:03:42.758 [720/743] Linking target app/dpdk-test-gpudev 00:03:42.758 [721/743] Linking target app/dpdk-test-pipeline 00:03:43.323 [722/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:43.323 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:43.323 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:43.581 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:43.581 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:43.581 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:43.840 [728/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.098 [729/743] Linking target lib/librte_pipeline.so.23.0 00:03:44.098 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:44.356 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:44.356 [732/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 
00:03:44.356 [733/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:44.356 [734/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:44.614 [735/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:44.614 [736/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:44.939 [737/743] Linking target app/dpdk-test-sad 00:03:44.939 [738/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:44.939 [739/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:45.198 [740/743] Linking target app/dpdk-test-regex 00:03:45.456 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:45.456 [742/743] Linking target app/dpdk-testpmd 00:03:46.023 [743/743] Linking target app/dpdk-test-security-perf 00:03:46.023 14:19:37 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:46.023 14:19:37 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:46.023 14:19:37 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:46.023 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:46.023 [0/1] Installing files. 00:03:46.284 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:46.284 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:46.285 Installing 
/home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:46.285 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.285 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.286 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.286 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.286 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.287 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.287 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.287 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.288 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:46.288 Installing 
/home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:46.288 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:46.547 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:46.548 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:46.548 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_meter.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing 
lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.548 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing 
lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:46.549 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:46.549 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:46.549 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.549 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:46.549 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.549 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.549 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.549 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.549 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.549 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.549 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.549 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.549 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.810 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.810 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.810 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.810 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.810 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.810 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.810 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.810 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:46.810 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.810 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 
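The EAL headers staged just above (rte_eal.h, rte_lcore.h, rte_malloc.h, rte_log.h, ...) are the entry points an application compiled against this build/include tree would pull in. As a minimal sketch only — the file name, EAL arguments and buffer size below are illustrative assumptions, not taken from this log — a program using these headers would look roughly like:

/* Minimal sketch (not part of this build log): exercising the EAL headers
 * staged above (rte_eal.h, rte_lcore.h, rte_malloc.h). File name, EAL
 * arguments and allocation size are illustrative assumptions. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_malloc.h>

int main(int argc, char **argv)
{
    /* Initialize the Environment Abstraction Layer; EAL consumes its own
     * arguments (core list, hugepage options, ...) before the "--" separator. */
    int ret = rte_eal_init(argc, argv);
    if (ret < 0) {
        fprintf(stderr, "rte_eal_init failed\n");
        return 1;
    }

    printf("EAL up, %u lcore(s) available\n", rte_lcore_count());

    /* Allocate from the DPDK (hugepage-backed) heap rather than libc malloc. */
    void *buf = rte_malloc("example_buf", 4096, 0);
    if (buf != NULL)
        rte_free(buf);

    /* Release EAL resources before exiting. */
    rte_eal_cleanup();
    return 0;
}

Such a program would be built against the headers and librte_* libraries being installed here; the exact compile/link invocation depends on the consumer and is not shown in this log.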
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.811 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing 
/home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.812 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 
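Among the headers staged a few lines earlier are rte_ring.h and its companions (rte_ring_core.h, rte_ring_elem.h, ...). As a rough, assumption-laden sketch — ring name, size and payload are invented for illustration, and EAL is assumed to be initialized as in the previous sketch — a single-producer/single-consumer ring built from those headers could be used like this:

/* Minimal sketch (not from this log): using the rte_ring.h header staged
 * above. Ring name, size and payload are illustrative assumptions; assumes
 * rte_eal_init() has already succeeded. */
#include <stdio.h>
#include <rte_ring.h>
#include <rte_lcore.h>
#include <rte_errno.h>

static int ring_demo(void)
{
    /* Single-producer / single-consumer ring with 1024 slots on the local NUMA node. */
    struct rte_ring *r = rte_ring_create("demo_ring", 1024, rte_socket_id(),
                                         RING_F_SP_ENQ | RING_F_SC_DEQ);
    if (r == NULL) {
        fprintf(stderr, "rte_ring_create failed: %d\n", rte_errno);
        return -1;
    }

    int payload = 42;
    void *obj = NULL;

    /* Enqueue and dequeue a single pointer-sized object. */
    if (rte_ring_enqueue(r, &payload) == 0 &&
        rte_ring_dequeue(r, &obj) == 0)
        printf("dequeued value: %d\n", *(int *)obj);

    rte_ring_free(r);
    return 0;
}

The same build/include tree also carries the mempool and mbuf headers installed above, which layer packet-buffer management on top of this ring primitive.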
Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:46.813 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:46.813 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:46.813 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:46.813 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:46.813 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:46.813 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:46.813 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:46.813 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:46.813 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:46.813 Installing symlink pointing to librte_rcu.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:46.813 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:46.813 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:46.813 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:46.813 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:46.813 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:46.813 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:46.813 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:46.813 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:46.813 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:46.813 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:46.813 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:46.813 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:46.813 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:46.813 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:46.813 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:46.813 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:46.813 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:46.813 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:46.813 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:46.813 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:46.813 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:46.813 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:46.813 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:46.813 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:46.813 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:46.813 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:46.813 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:46.813 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:46.813 Installing 
symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:46.813 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:46.813 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:46.813 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:46.813 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:46.813 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:46.813 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:46.814 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:46.814 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:46.814 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:46.814 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:46.814 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:46.814 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:46.814 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:46.814 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:46.814 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:46.814 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:46.814 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:46.814 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:46.814 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:46.814 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:46.814 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:46.814 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:46.814 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:46.814 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:46.814 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:46.814 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:46.814 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:46.814 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:46.814 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:46.814 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:46.814 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:46.814 Installing symlink pointing to 
librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:46.814 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:46.814 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:46.814 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:46.814 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:46.814 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:46.814 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:46.814 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:46.814 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:46.814 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:46.814 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:46.814 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:46.814 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:46.814 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:46.814 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:46.814 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:46.814 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:46.814 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:46.814 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:46.814 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:46.814 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:46.814 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:46.814 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:46.814 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:46.814 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:46.814 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:46.814 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:46.814 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:46.814 Installing symlink pointing to librte_stack.so.23 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:46.814 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:46.814 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:46.814 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:46.814 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:46.814 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:46.814 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:46.814 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:46.814 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:46.814 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:46.814 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:46.814 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:46.814 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:46.814 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:46.814 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:46.814 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:46.814 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:46.814 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:46.814 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:46.814 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:46.814 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:46.814 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:46.814 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:46.814 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:46.814 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:46.814 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:46.814 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:46.814 Running custom install script '/bin/sh 
/home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:47.073 14:19:39 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:47.073 14:19:39 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:47.073 00:03:47.073 real 0m52.189s 00:03:47.073 user 6m9.825s 00:03:47.073 sys 0m55.871s 00:03:47.073 14:19:39 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:47.073 ************************************ 00:03:47.073 END TEST build_native_dpdk 00:03:47.073 14:19:39 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:47.073 ************************************ 00:03:47.073 14:19:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:47.073 14:19:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:47.073 14:19:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:47.073 14:19:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:47.073 14:19:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:47.073 14:19:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:47.073 14:19:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:47.073 14:19:39 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:47.073 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:47.332 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.332 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:47.332 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:47.590 Using 'verbs' RDMA provider 00:04:00.730 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:15.612 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:15.612 Creating mk/config.mk...done. 00:04:15.612 Creating mk/cc.flags.mk...done. 00:04:15.612 Type 'make' to build. 
00:04:15.612 14:20:05 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:15.612 14:20:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:15.612 14:20:05 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:15.612 14:20:05 -- common/autotest_common.sh@10 -- $ set +x 00:04:15.612 ************************************ 00:04:15.612 START TEST make 00:04:15.612 ************************************ 00:04:15.612 14:20:05 make -- common/autotest_common.sh@1129 -- $ make -j10 00:05:11.836 CC lib/log/log.o 00:05:11.836 CC lib/log/log_flags.o 00:05:11.836 CC lib/log/log_deprecated.o 00:05:11.836 CC lib/ut_mock/mock.o 00:05:11.836 CC lib/ut/ut.o 00:05:11.836 LIB libspdk_log.a 00:05:11.836 LIB libspdk_ut.a 00:05:11.836 LIB libspdk_ut_mock.a 00:05:11.836 SO libspdk_ut.so.2.0 00:05:11.836 SO libspdk_ut_mock.so.6.0 00:05:11.836 SO libspdk_log.so.7.1 00:05:11.836 SYMLINK libspdk_ut.so 00:05:11.836 SYMLINK libspdk_ut_mock.so 00:05:11.836 SYMLINK libspdk_log.so 00:05:11.836 CXX lib/trace_parser/trace.o 00:05:11.836 CC lib/ioat/ioat.o 00:05:11.836 CC lib/util/bit_array.o 00:05:11.836 CC lib/util/base64.o 00:05:11.836 CC lib/util/cpuset.o 00:05:11.836 CC lib/util/crc16.o 00:05:11.836 CC lib/util/crc32.o 00:05:11.836 CC lib/util/crc32c.o 00:05:11.836 CC lib/dma/dma.o 00:05:11.836 CC lib/vfio_user/host/vfio_user_pci.o 00:05:11.836 CC lib/util/crc32_ieee.o 00:05:11.836 CC lib/util/crc64.o 00:05:11.836 CC lib/util/dif.o 00:05:11.836 CC lib/util/fd.o 00:05:11.836 LIB libspdk_dma.a 00:05:11.836 CC lib/util/fd_group.o 00:05:11.836 CC lib/vfio_user/host/vfio_user.o 00:05:11.836 SO libspdk_dma.so.5.0 00:05:11.836 CC lib/util/file.o 00:05:11.836 CC lib/util/hexlify.o 00:05:11.836 SYMLINK libspdk_dma.so 00:05:11.836 LIB libspdk_ioat.a 00:05:11.836 CC lib/util/iov.o 00:05:11.836 SO libspdk_ioat.so.7.0 00:05:11.836 CC lib/util/math.o 00:05:11.836 SYMLINK libspdk_ioat.so 00:05:11.836 CC lib/util/net.o 00:05:11.836 CC lib/util/pipe.o 00:05:11.836 CC lib/util/strerror_tls.o 00:05:11.836 CC lib/util/string.o 00:05:11.836 LIB libspdk_vfio_user.a 00:05:11.836 SO libspdk_vfio_user.so.5.0 00:05:11.836 CC lib/util/uuid.o 00:05:11.836 CC lib/util/xor.o 00:05:11.836 CC lib/util/zipf.o 00:05:11.836 CC lib/util/md5.o 00:05:11.836 SYMLINK libspdk_vfio_user.so 00:05:11.836 LIB libspdk_util.a 00:05:11.836 SO libspdk_util.so.10.1 00:05:11.836 LIB libspdk_trace_parser.a 00:05:11.836 SYMLINK libspdk_util.so 00:05:11.836 SO libspdk_trace_parser.so.6.0 00:05:11.836 SYMLINK libspdk_trace_parser.so 00:05:11.836 CC lib/env_dpdk/env.o 00:05:11.836 CC lib/env_dpdk/pci.o 00:05:11.836 CC lib/env_dpdk/memory.o 00:05:11.836 CC lib/env_dpdk/threads.o 00:05:11.836 CC lib/env_dpdk/init.o 00:05:11.836 CC lib/json/json_parse.o 00:05:11.836 CC lib/idxd/idxd.o 00:05:11.836 CC lib/rdma_utils/rdma_utils.o 00:05:11.837 CC lib/vmd/vmd.o 00:05:11.837 CC lib/conf/conf.o 00:05:11.837 CC lib/env_dpdk/pci_ioat.o 00:05:11.837 LIB libspdk_conf.a 00:05:11.837 CC lib/json/json_util.o 00:05:11.837 SO libspdk_conf.so.6.0 00:05:11.837 LIB libspdk_rdma_utils.a 00:05:11.837 CC lib/idxd/idxd_user.o 00:05:11.837 SO libspdk_rdma_utils.so.1.0 00:05:11.837 SYMLINK libspdk_conf.so 00:05:11.837 CC lib/idxd/idxd_kernel.o 00:05:11.837 CC lib/env_dpdk/pci_virtio.o 00:05:11.837 CC lib/env_dpdk/pci_vmd.o 00:05:11.837 SYMLINK libspdk_rdma_utils.so 00:05:11.837 CC lib/env_dpdk/pci_idxd.o 00:05:11.837 CC lib/vmd/led.o 00:05:11.837 CC lib/json/json_write.o 00:05:11.837 CC lib/env_dpdk/pci_event.o 00:05:11.837 CC lib/env_dpdk/sigbus_handler.o 00:05:11.837 CC 
lib/env_dpdk/pci_dpdk.o 00:05:11.837 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:11.837 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:11.837 LIB libspdk_idxd.a 00:05:11.837 SO libspdk_idxd.so.12.1 00:05:11.837 LIB libspdk_vmd.a 00:05:11.837 CC lib/rdma_provider/common.o 00:05:11.837 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:11.837 SYMLINK libspdk_idxd.so 00:05:11.837 SO libspdk_vmd.so.6.0 00:05:11.837 SYMLINK libspdk_vmd.so 00:05:11.837 LIB libspdk_json.a 00:05:11.837 SO libspdk_json.so.6.0 00:05:11.837 SYMLINK libspdk_json.so 00:05:11.837 LIB libspdk_rdma_provider.a 00:05:11.837 SO libspdk_rdma_provider.so.7.0 00:05:11.837 SYMLINK libspdk_rdma_provider.so 00:05:11.837 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:11.837 CC lib/jsonrpc/jsonrpc_client.o 00:05:11.837 CC lib/jsonrpc/jsonrpc_server.o 00:05:11.837 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:11.837 LIB libspdk_env_dpdk.a 00:05:11.837 LIB libspdk_jsonrpc.a 00:05:11.837 SO libspdk_jsonrpc.so.6.0 00:05:11.837 SO libspdk_env_dpdk.so.15.1 00:05:11.837 SYMLINK libspdk_jsonrpc.so 00:05:11.837 SYMLINK libspdk_env_dpdk.so 00:05:11.837 CC lib/rpc/rpc.o 00:05:11.837 LIB libspdk_rpc.a 00:05:11.837 SO libspdk_rpc.so.6.0 00:05:11.837 SYMLINK libspdk_rpc.so 00:05:11.837 CC lib/keyring/keyring.o 00:05:11.837 CC lib/keyring/keyring_rpc.o 00:05:11.837 CC lib/notify/notify.o 00:05:11.837 CC lib/trace/trace.o 00:05:11.837 CC lib/notify/notify_rpc.o 00:05:11.837 CC lib/trace/trace_flags.o 00:05:11.837 CC lib/trace/trace_rpc.o 00:05:11.837 LIB libspdk_notify.a 00:05:11.837 SO libspdk_notify.so.6.0 00:05:11.837 LIB libspdk_keyring.a 00:05:11.837 SYMLINK libspdk_notify.so 00:05:11.837 LIB libspdk_trace.a 00:05:11.837 SO libspdk_keyring.so.2.0 00:05:11.837 SO libspdk_trace.so.11.0 00:05:11.837 SYMLINK libspdk_keyring.so 00:05:11.837 SYMLINK libspdk_trace.so 00:05:11.837 CC lib/thread/iobuf.o 00:05:11.837 CC lib/thread/thread.o 00:05:11.837 CC lib/sock/sock.o 00:05:11.837 CC lib/sock/sock_rpc.o 00:05:11.837 LIB libspdk_sock.a 00:05:12.096 SO libspdk_sock.so.10.0 00:05:12.096 SYMLINK libspdk_sock.so 00:05:12.355 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:12.355 CC lib/nvme/nvme_ctrlr.o 00:05:12.355 CC lib/nvme/nvme_fabric.o 00:05:12.355 CC lib/nvme/nvme_pcie_common.o 00:05:12.355 CC lib/nvme/nvme_ns_cmd.o 00:05:12.355 CC lib/nvme/nvme_ns.o 00:05:12.355 CC lib/nvme/nvme_pcie.o 00:05:12.355 CC lib/nvme/nvme.o 00:05:12.355 CC lib/nvme/nvme_qpair.o 00:05:13.290 LIB libspdk_thread.a 00:05:13.290 CC lib/nvme/nvme_quirks.o 00:05:13.290 SO libspdk_thread.so.11.0 00:05:13.290 CC lib/nvme/nvme_transport.o 00:05:13.290 CC lib/nvme/nvme_discovery.o 00:05:13.290 SYMLINK libspdk_thread.so 00:05:13.290 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:13.290 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:13.290 CC lib/nvme/nvme_tcp.o 00:05:13.290 CC lib/nvme/nvme_opal.o 00:05:13.549 CC lib/nvme/nvme_io_msg.o 00:05:13.549 CC lib/nvme/nvme_poll_group.o 00:05:13.837 CC lib/nvme/nvme_zns.o 00:05:13.837 CC lib/nvme/nvme_stubs.o 00:05:13.837 CC lib/nvme/nvme_auth.o 00:05:13.837 CC lib/nvme/nvme_cuse.o 00:05:14.100 CC lib/accel/accel.o 00:05:14.100 CC lib/nvme/nvme_rdma.o 00:05:14.100 CC lib/blob/blobstore.o 00:05:14.100 CC lib/blob/request.o 00:05:14.359 CC lib/blob/zeroes.o 00:05:14.359 CC lib/accel/accel_rpc.o 00:05:14.618 CC lib/blob/blob_bs_dev.o 00:05:14.618 CC lib/accel/accel_sw.o 00:05:14.876 CC lib/init/subsystem.o 00:05:14.876 CC lib/init/json_config.o 00:05:14.876 CC lib/init/subsystem_rpc.o 00:05:14.876 CC lib/init/rpc.o 00:05:14.876 CC lib/virtio/virtio.o 00:05:14.876 CC lib/virtio/virtio_vhost_user.o 
00:05:14.876 CC lib/fsdev/fsdev.o 00:05:15.135 CC lib/fsdev/fsdev_io.o 00:05:15.135 CC lib/virtio/virtio_vfio_user.o 00:05:15.135 CC lib/virtio/virtio_pci.o 00:05:15.135 LIB libspdk_init.a 00:05:15.135 LIB libspdk_accel.a 00:05:15.135 CC lib/fsdev/fsdev_rpc.o 00:05:15.135 SO libspdk_init.so.6.0 00:05:15.135 SO libspdk_accel.so.16.0 00:05:15.393 SYMLINK libspdk_init.so 00:05:15.393 SYMLINK libspdk_accel.so 00:05:15.393 LIB libspdk_virtio.a 00:05:15.393 SO libspdk_virtio.so.7.0 00:05:15.393 SYMLINK libspdk_virtio.so 00:05:15.652 CC lib/event/app.o 00:05:15.652 CC lib/event/reactor.o 00:05:15.652 CC lib/event/log_rpc.o 00:05:15.652 CC lib/event/app_rpc.o 00:05:15.652 CC lib/event/scheduler_static.o 00:05:15.652 CC lib/bdev/bdev.o 00:05:15.652 CC lib/bdev/bdev_rpc.o 00:05:15.652 LIB libspdk_nvme.a 00:05:15.652 LIB libspdk_fsdev.a 00:05:15.652 SO libspdk_fsdev.so.2.0 00:05:15.652 CC lib/bdev/bdev_zone.o 00:05:15.652 CC lib/bdev/part.o 00:05:15.652 SYMLINK libspdk_fsdev.so 00:05:15.652 CC lib/bdev/scsi_nvme.o 00:05:15.912 SO libspdk_nvme.so.15.0 00:05:15.912 LIB libspdk_event.a 00:05:16.171 SO libspdk_event.so.14.0 00:05:16.171 SYMLINK libspdk_nvme.so 00:05:16.171 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:16.171 SYMLINK libspdk_event.so 00:05:16.738 LIB libspdk_fuse_dispatcher.a 00:05:16.738 SO libspdk_fuse_dispatcher.so.1.0 00:05:16.738 SYMLINK libspdk_fuse_dispatcher.so 00:05:17.306 LIB libspdk_blob.a 00:05:17.306 SO libspdk_blob.so.12.0 00:05:17.306 SYMLINK libspdk_blob.so 00:05:17.565 CC lib/blobfs/blobfs.o 00:05:17.565 CC lib/blobfs/tree.o 00:05:17.565 CC lib/lvol/lvol.o 00:05:18.132 LIB libspdk_bdev.a 00:05:18.391 SO libspdk_bdev.so.17.0 00:05:18.391 SYMLINK libspdk_bdev.so 00:05:18.391 LIB libspdk_blobfs.a 00:05:18.650 LIB libspdk_lvol.a 00:05:18.650 SO libspdk_blobfs.so.11.0 00:05:18.650 SO libspdk_lvol.so.11.0 00:05:18.650 SYMLINK libspdk_blobfs.so 00:05:18.650 CC lib/nbd/nbd.o 00:05:18.650 CC lib/nbd/nbd_rpc.o 00:05:18.650 CC lib/ublk/ublk.o 00:05:18.650 CC lib/ublk/ublk_rpc.o 00:05:18.650 CC lib/nvmf/ctrlr.o 00:05:18.650 CC lib/nvmf/ctrlr_discovery.o 00:05:18.650 CC lib/nvmf/ctrlr_bdev.o 00:05:18.650 SYMLINK libspdk_lvol.so 00:05:18.650 CC lib/scsi/dev.o 00:05:18.650 CC lib/scsi/lun.o 00:05:18.650 CC lib/ftl/ftl_core.o 00:05:18.909 CC lib/scsi/port.o 00:05:18.909 CC lib/nvmf/subsystem.o 00:05:18.909 CC lib/nvmf/nvmf.o 00:05:18.909 CC lib/scsi/scsi.o 00:05:18.909 CC lib/scsi/scsi_bdev.o 00:05:18.909 LIB libspdk_nbd.a 00:05:19.167 CC lib/ftl/ftl_init.o 00:05:19.167 SO libspdk_nbd.so.7.0 00:05:19.167 SYMLINK libspdk_nbd.so 00:05:19.167 CC lib/ftl/ftl_layout.o 00:05:19.167 CC lib/ftl/ftl_debug.o 00:05:19.167 CC lib/ftl/ftl_io.o 00:05:19.167 LIB libspdk_ublk.a 00:05:19.167 CC lib/ftl/ftl_sb.o 00:05:19.167 SO libspdk_ublk.so.3.0 00:05:19.426 CC lib/ftl/ftl_l2p.o 00:05:19.426 SYMLINK libspdk_ublk.so 00:05:19.426 CC lib/ftl/ftl_l2p_flat.o 00:05:19.426 CC lib/ftl/ftl_nv_cache.o 00:05:19.426 CC lib/scsi/scsi_pr.o 00:05:19.426 CC lib/scsi/scsi_rpc.o 00:05:19.426 CC lib/scsi/task.o 00:05:19.426 CC lib/ftl/ftl_band.o 00:05:19.426 CC lib/ftl/ftl_band_ops.o 00:05:19.684 CC lib/ftl/ftl_writer.o 00:05:19.684 CC lib/ftl/ftl_rq.o 00:05:19.684 CC lib/nvmf/nvmf_rpc.o 00:05:19.684 LIB libspdk_scsi.a 00:05:19.943 CC lib/ftl/ftl_reloc.o 00:05:19.943 SO libspdk_scsi.so.9.0 00:05:19.943 CC lib/nvmf/transport.o 00:05:19.943 CC lib/ftl/ftl_l2p_cache.o 00:05:19.943 CC lib/nvmf/tcp.o 00:05:19.943 SYMLINK libspdk_scsi.so 00:05:19.943 CC lib/ftl/ftl_p2l.o 00:05:19.943 CC lib/nvmf/stubs.o 00:05:20.201 CC 
lib/nvmf/mdns_server.o 00:05:20.201 CC lib/nvmf/rdma.o 00:05:20.201 CC lib/nvmf/auth.o 00:05:20.201 CC lib/ftl/ftl_p2l_log.o 00:05:20.460 CC lib/ftl/mngt/ftl_mngt.o 00:05:20.460 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:20.460 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:20.460 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:20.460 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:20.719 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:20.719 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:20.719 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:20.719 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:20.719 CC lib/iscsi/conn.o 00:05:20.719 CC lib/vhost/vhost.o 00:05:20.978 CC lib/vhost/vhost_rpc.o 00:05:20.978 CC lib/iscsi/init_grp.o 00:05:20.978 CC lib/iscsi/iscsi.o 00:05:20.978 CC lib/iscsi/param.o 00:05:20.978 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:21.237 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:21.237 CC lib/iscsi/portal_grp.o 00:05:21.237 CC lib/iscsi/tgt_node.o 00:05:21.237 CC lib/iscsi/iscsi_subsystem.o 00:05:21.237 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:21.496 CC lib/iscsi/iscsi_rpc.o 00:05:21.496 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:21.496 CC lib/ftl/utils/ftl_conf.o 00:05:21.496 CC lib/ftl/utils/ftl_md.o 00:05:21.755 CC lib/vhost/vhost_scsi.o 00:05:21.755 CC lib/ftl/utils/ftl_mempool.o 00:05:21.755 CC lib/iscsi/task.o 00:05:21.755 CC lib/ftl/utils/ftl_bitmap.o 00:05:21.755 CC lib/vhost/vhost_blk.o 00:05:21.755 CC lib/vhost/rte_vhost_user.o 00:05:21.755 CC lib/ftl/utils/ftl_property.o 00:05:22.014 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:22.014 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:22.014 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:22.014 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:22.014 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:22.272 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:22.272 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:22.272 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:22.272 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:22.272 LIB libspdk_nvmf.a 00:05:22.272 LIB libspdk_iscsi.a 00:05:22.272 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:22.272 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:22.531 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:22.531 SO libspdk_iscsi.so.8.0 00:05:22.531 SO libspdk_nvmf.so.20.0 00:05:22.531 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:22.531 CC lib/ftl/base/ftl_base_dev.o 00:05:22.531 SYMLINK libspdk_iscsi.so 00:05:22.531 CC lib/ftl/base/ftl_base_bdev.o 00:05:22.531 CC lib/ftl/ftl_trace.o 00:05:22.531 SYMLINK libspdk_nvmf.so 00:05:22.790 LIB libspdk_ftl.a 00:05:23.049 LIB libspdk_vhost.a 00:05:23.049 SO libspdk_vhost.so.8.0 00:05:23.049 SYMLINK libspdk_vhost.so 00:05:23.049 SO libspdk_ftl.so.9.0 00:05:23.307 SYMLINK libspdk_ftl.so 00:05:23.874 CC module/env_dpdk/env_dpdk_rpc.o 00:05:23.874 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:23.874 CC module/blob/bdev/blob_bdev.o 00:05:23.874 CC module/accel/error/accel_error.o 00:05:23.874 CC module/scheduler/gscheduler/gscheduler.o 00:05:23.874 CC module/accel/ioat/accel_ioat.o 00:05:23.874 CC module/fsdev/aio/fsdev_aio.o 00:05:23.874 CC module/sock/posix/posix.o 00:05:23.874 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:23.874 CC module/keyring/file/keyring.o 00:05:23.874 LIB libspdk_env_dpdk_rpc.a 00:05:23.874 SO libspdk_env_dpdk_rpc.so.6.0 00:05:23.874 SYMLINK libspdk_env_dpdk_rpc.so 00:05:23.874 CC module/keyring/file/keyring_rpc.o 00:05:24.133 LIB libspdk_scheduler_gscheduler.a 00:05:24.133 LIB libspdk_scheduler_dpdk_governor.a 00:05:24.133 CC module/accel/ioat/accel_ioat_rpc.o 00:05:24.133 SO libspdk_scheduler_gscheduler.so.4.0 00:05:24.133 CC 
module/accel/error/accel_error_rpc.o 00:05:24.133 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:24.133 LIB libspdk_scheduler_dynamic.a 00:05:24.133 SO libspdk_scheduler_dynamic.so.4.0 00:05:24.133 LIB libspdk_blob_bdev.a 00:05:24.133 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:24.133 SYMLINK libspdk_scheduler_dynamic.so 00:05:24.133 SYMLINK libspdk_scheduler_gscheduler.so 00:05:24.133 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:24.133 LIB libspdk_keyring_file.a 00:05:24.133 SO libspdk_blob_bdev.so.12.0 00:05:24.133 SO libspdk_keyring_file.so.2.0 00:05:24.133 LIB libspdk_accel_ioat.a 00:05:24.133 LIB libspdk_accel_error.a 00:05:24.133 SO libspdk_accel_ioat.so.6.0 00:05:24.133 SYMLINK libspdk_blob_bdev.so 00:05:24.133 SO libspdk_accel_error.so.2.0 00:05:24.133 SYMLINK libspdk_keyring_file.so 00:05:24.133 CC module/fsdev/aio/linux_aio_mgr.o 00:05:24.133 CC module/sock/uring/uring.o 00:05:24.391 SYMLINK libspdk_accel_ioat.so 00:05:24.391 SYMLINK libspdk_accel_error.so 00:05:24.391 CC module/accel/dsa/accel_dsa.o 00:05:24.392 CC module/keyring/linux/keyring.o 00:05:24.392 CC module/accel/dsa/accel_dsa_rpc.o 00:05:24.392 CC module/accel/iaa/accel_iaa.o 00:05:24.392 CC module/accel/iaa/accel_iaa_rpc.o 00:05:24.392 CC module/keyring/linux/keyring_rpc.o 00:05:24.392 LIB libspdk_fsdev_aio.a 00:05:24.650 SO libspdk_fsdev_aio.so.1.0 00:05:24.650 CC module/blobfs/bdev/blobfs_bdev.o 00:05:24.650 LIB libspdk_sock_posix.a 00:05:24.650 CC module/bdev/delay/vbdev_delay.o 00:05:24.650 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:24.650 SO libspdk_sock_posix.so.6.0 00:05:24.650 LIB libspdk_keyring_linux.a 00:05:24.650 SYMLINK libspdk_fsdev_aio.so 00:05:24.650 LIB libspdk_accel_dsa.a 00:05:24.650 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:24.650 SO libspdk_keyring_linux.so.1.0 00:05:24.650 LIB libspdk_accel_iaa.a 00:05:24.650 SO libspdk_accel_dsa.so.5.0 00:05:24.650 CC module/bdev/error/vbdev_error.o 00:05:24.650 SYMLINK libspdk_sock_posix.so 00:05:24.650 CC module/bdev/error/vbdev_error_rpc.o 00:05:24.650 SO libspdk_accel_iaa.so.3.0 00:05:24.650 SYMLINK libspdk_keyring_linux.so 00:05:24.650 SYMLINK libspdk_accel_dsa.so 00:05:24.650 SYMLINK libspdk_accel_iaa.so 00:05:24.909 LIB libspdk_blobfs_bdev.a 00:05:24.909 SO libspdk_blobfs_bdev.so.6.0 00:05:24.909 SYMLINK libspdk_blobfs_bdev.so 00:05:24.909 CC module/bdev/gpt/gpt.o 00:05:24.909 CC module/bdev/lvol/vbdev_lvol.o 00:05:24.909 CC module/bdev/malloc/bdev_malloc.o 00:05:24.909 LIB libspdk_sock_uring.a 00:05:24.909 LIB libspdk_bdev_delay.a 00:05:24.909 LIB libspdk_bdev_error.a 00:05:24.909 CC module/bdev/null/bdev_null.o 00:05:24.909 SO libspdk_sock_uring.so.5.0 00:05:24.909 SO libspdk_bdev_delay.so.6.0 00:05:24.909 SO libspdk_bdev_error.so.6.0 00:05:25.169 CC module/bdev/nvme/bdev_nvme.o 00:05:25.169 SYMLINK libspdk_bdev_delay.so 00:05:25.169 SYMLINK libspdk_sock_uring.so 00:05:25.169 SYMLINK libspdk_bdev_error.so 00:05:25.169 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:25.169 CC module/bdev/passthru/vbdev_passthru.o 00:05:25.169 CC module/bdev/null/bdev_null_rpc.o 00:05:25.169 CC module/bdev/raid/bdev_raid.o 00:05:25.169 CC module/bdev/gpt/vbdev_gpt.o 00:05:25.169 CC module/bdev/raid/bdev_raid_rpc.o 00:05:25.169 CC module/bdev/split/vbdev_split.o 00:05:25.169 LIB libspdk_bdev_null.a 00:05:25.169 CC module/bdev/raid/bdev_raid_sb.o 00:05:25.428 SO libspdk_bdev_null.so.6.0 00:05:25.428 LIB libspdk_bdev_malloc.a 00:05:25.428 SYMLINK libspdk_bdev_null.so 00:05:25.428 SO libspdk_bdev_malloc.so.6.0 00:05:25.428 CC module/bdev/lvol/vbdev_lvol_rpc.o 
00:05:25.428 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:25.428 LIB libspdk_bdev_gpt.a 00:05:25.428 SYMLINK libspdk_bdev_malloc.so 00:05:25.428 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:25.428 SO libspdk_bdev_gpt.so.6.0 00:05:25.428 CC module/bdev/raid/raid0.o 00:05:25.428 CC module/bdev/split/vbdev_split_rpc.o 00:05:25.428 CC module/bdev/raid/raid1.o 00:05:25.428 SYMLINK libspdk_bdev_gpt.so 00:05:25.428 CC module/bdev/nvme/nvme_rpc.o 00:05:25.428 CC module/bdev/raid/concat.o 00:05:25.686 LIB libspdk_bdev_passthru.a 00:05:25.686 SO libspdk_bdev_passthru.so.6.0 00:05:25.686 SYMLINK libspdk_bdev_passthru.so 00:05:25.686 LIB libspdk_bdev_split.a 00:05:25.686 SO libspdk_bdev_split.so.6.0 00:05:25.686 LIB libspdk_bdev_lvol.a 00:05:25.686 CC module/bdev/nvme/bdev_mdns_client.o 00:05:25.686 SYMLINK libspdk_bdev_split.so 00:05:25.686 CC module/bdev/nvme/vbdev_opal.o 00:05:25.945 SO libspdk_bdev_lvol.so.6.0 00:05:25.945 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:25.945 SYMLINK libspdk_bdev_lvol.so 00:05:25.945 CC module/bdev/uring/bdev_uring.o 00:05:25.945 CC module/bdev/ftl/bdev_ftl.o 00:05:25.945 CC module/bdev/aio/bdev_aio.o 00:05:25.945 CC module/bdev/aio/bdev_aio_rpc.o 00:05:25.945 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:26.204 CC module/bdev/uring/bdev_uring_rpc.o 00:05:26.204 LIB libspdk_bdev_raid.a 00:05:26.204 SO libspdk_bdev_raid.so.6.0 00:05:26.204 CC module/bdev/iscsi/bdev_iscsi.o 00:05:26.204 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:26.204 LIB libspdk_bdev_zone_block.a 00:05:26.204 SYMLINK libspdk_bdev_raid.so 00:05:26.204 SO libspdk_bdev_zone_block.so.6.0 00:05:26.204 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:26.204 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:26.204 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:26.462 SYMLINK libspdk_bdev_zone_block.so 00:05:26.462 LIB libspdk_bdev_uring.a 00:05:26.462 LIB libspdk_bdev_aio.a 00:05:26.462 SO libspdk_bdev_uring.so.6.0 00:05:26.462 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:26.462 SO libspdk_bdev_aio.so.6.0 00:05:26.462 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:26.462 SYMLINK libspdk_bdev_uring.so 00:05:26.462 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:26.462 SYMLINK libspdk_bdev_aio.so 00:05:26.462 LIB libspdk_bdev_ftl.a 00:05:26.462 SO libspdk_bdev_ftl.so.6.0 00:05:26.721 LIB libspdk_bdev_iscsi.a 00:05:26.721 SO libspdk_bdev_iscsi.so.6.0 00:05:26.721 SYMLINK libspdk_bdev_ftl.so 00:05:26.721 SYMLINK libspdk_bdev_iscsi.so 00:05:26.979 LIB libspdk_bdev_virtio.a 00:05:26.979 SO libspdk_bdev_virtio.so.6.0 00:05:26.979 SYMLINK libspdk_bdev_virtio.so 00:05:27.915 LIB libspdk_bdev_nvme.a 00:05:27.915 SO libspdk_bdev_nvme.so.7.1 00:05:27.915 SYMLINK libspdk_bdev_nvme.so 00:05:28.482 CC module/event/subsystems/sock/sock.o 00:05:28.482 CC module/event/subsystems/scheduler/scheduler.o 00:05:28.482 CC module/event/subsystems/iobuf/iobuf.o 00:05:28.482 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:28.482 CC module/event/subsystems/vmd/vmd.o 00:05:28.482 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:28.482 CC module/event/subsystems/fsdev/fsdev.o 00:05:28.482 CC module/event/subsystems/keyring/keyring.o 00:05:28.482 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:28.482 LIB libspdk_event_keyring.a 00:05:28.482 LIB libspdk_event_fsdev.a 00:05:28.482 LIB libspdk_event_sock.a 00:05:28.482 SO libspdk_event_keyring.so.1.0 00:05:28.482 LIB libspdk_event_scheduler.a 00:05:28.482 LIB libspdk_event_vhost_blk.a 00:05:28.482 SO libspdk_event_fsdev.so.1.0 00:05:28.482 LIB libspdk_event_iobuf.a 00:05:28.482 
SO libspdk_event_sock.so.5.0 00:05:28.482 LIB libspdk_event_vmd.a 00:05:28.482 SO libspdk_event_scheduler.so.4.0 00:05:28.482 SO libspdk_event_vhost_blk.so.3.0 00:05:28.482 SO libspdk_event_iobuf.so.3.0 00:05:28.482 SO libspdk_event_vmd.so.6.0 00:05:28.482 SYMLINK libspdk_event_keyring.so 00:05:28.482 SYMLINK libspdk_event_fsdev.so 00:05:28.482 SYMLINK libspdk_event_sock.so 00:05:28.741 SYMLINK libspdk_event_scheduler.so 00:05:28.741 SYMLINK libspdk_event_vhost_blk.so 00:05:28.741 SYMLINK libspdk_event_iobuf.so 00:05:28.741 SYMLINK libspdk_event_vmd.so 00:05:29.000 CC module/event/subsystems/accel/accel.o 00:05:29.000 LIB libspdk_event_accel.a 00:05:29.259 SO libspdk_event_accel.so.6.0 00:05:29.259 SYMLINK libspdk_event_accel.so 00:05:29.518 CC module/event/subsystems/bdev/bdev.o 00:05:29.777 LIB libspdk_event_bdev.a 00:05:29.777 SO libspdk_event_bdev.so.6.0 00:05:29.777 SYMLINK libspdk_event_bdev.so 00:05:30.036 CC module/event/subsystems/scsi/scsi.o 00:05:30.036 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:30.036 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:30.036 CC module/event/subsystems/ublk/ublk.o 00:05:30.036 CC module/event/subsystems/nbd/nbd.o 00:05:30.294 LIB libspdk_event_scsi.a 00:05:30.294 LIB libspdk_event_nbd.a 00:05:30.294 LIB libspdk_event_ublk.a 00:05:30.294 SO libspdk_event_scsi.so.6.0 00:05:30.294 SO libspdk_event_ublk.so.3.0 00:05:30.294 SO libspdk_event_nbd.so.6.0 00:05:30.294 SYMLINK libspdk_event_scsi.so 00:05:30.294 SYMLINK libspdk_event_ublk.so 00:05:30.294 SYMLINK libspdk_event_nbd.so 00:05:30.294 LIB libspdk_event_nvmf.a 00:05:30.294 SO libspdk_event_nvmf.so.6.0 00:05:30.553 SYMLINK libspdk_event_nvmf.so 00:05:30.553 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:30.553 CC module/event/subsystems/iscsi/iscsi.o 00:05:30.813 LIB libspdk_event_vhost_scsi.a 00:05:30.813 LIB libspdk_event_iscsi.a 00:05:30.813 SO libspdk_event_vhost_scsi.so.3.0 00:05:30.813 SO libspdk_event_iscsi.so.6.0 00:05:30.813 SYMLINK libspdk_event_vhost_scsi.so 00:05:30.813 SYMLINK libspdk_event_iscsi.so 00:05:31.072 SO libspdk.so.6.0 00:05:31.072 SYMLINK libspdk.so 00:05:31.330 CC test/rpc_client/rpc_client_test.o 00:05:31.330 TEST_HEADER include/spdk/accel.h 00:05:31.330 TEST_HEADER include/spdk/accel_module.h 00:05:31.330 CXX app/trace/trace.o 00:05:31.330 TEST_HEADER include/spdk/assert.h 00:05:31.330 TEST_HEADER include/spdk/barrier.h 00:05:31.330 TEST_HEADER include/spdk/base64.h 00:05:31.330 TEST_HEADER include/spdk/bdev.h 00:05:31.330 TEST_HEADER include/spdk/bdev_module.h 00:05:31.330 TEST_HEADER include/spdk/bdev_zone.h 00:05:31.330 TEST_HEADER include/spdk/bit_array.h 00:05:31.330 TEST_HEADER include/spdk/bit_pool.h 00:05:31.330 TEST_HEADER include/spdk/blob_bdev.h 00:05:31.330 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:31.330 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:31.330 TEST_HEADER include/spdk/blobfs.h 00:05:31.330 TEST_HEADER include/spdk/blob.h 00:05:31.330 TEST_HEADER include/spdk/conf.h 00:05:31.330 TEST_HEADER include/spdk/config.h 00:05:31.330 TEST_HEADER include/spdk/cpuset.h 00:05:31.330 TEST_HEADER include/spdk/crc16.h 00:05:31.330 TEST_HEADER include/spdk/crc32.h 00:05:31.330 TEST_HEADER include/spdk/crc64.h 00:05:31.330 TEST_HEADER include/spdk/dif.h 00:05:31.330 TEST_HEADER include/spdk/dma.h 00:05:31.330 TEST_HEADER include/spdk/endian.h 00:05:31.330 TEST_HEADER include/spdk/env_dpdk.h 00:05:31.330 TEST_HEADER include/spdk/env.h 00:05:31.330 TEST_HEADER include/spdk/event.h 00:05:31.330 TEST_HEADER include/spdk/fd_group.h 00:05:31.330 
TEST_HEADER include/spdk/fd.h 00:05:31.330 TEST_HEADER include/spdk/file.h 00:05:31.330 TEST_HEADER include/spdk/fsdev.h 00:05:31.330 TEST_HEADER include/spdk/fsdev_module.h 00:05:31.330 TEST_HEADER include/spdk/ftl.h 00:05:31.331 TEST_HEADER include/spdk/gpt_spec.h 00:05:31.590 TEST_HEADER include/spdk/hexlify.h 00:05:31.590 TEST_HEADER include/spdk/histogram_data.h 00:05:31.590 TEST_HEADER include/spdk/idxd.h 00:05:31.590 TEST_HEADER include/spdk/idxd_spec.h 00:05:31.590 TEST_HEADER include/spdk/init.h 00:05:31.590 CC test/thread/poller_perf/poller_perf.o 00:05:31.590 CC examples/ioat/perf/perf.o 00:05:31.590 TEST_HEADER include/spdk/ioat.h 00:05:31.590 TEST_HEADER include/spdk/ioat_spec.h 00:05:31.590 TEST_HEADER include/spdk/iscsi_spec.h 00:05:31.590 TEST_HEADER include/spdk/json.h 00:05:31.590 CC examples/util/zipf/zipf.o 00:05:31.590 TEST_HEADER include/spdk/jsonrpc.h 00:05:31.590 TEST_HEADER include/spdk/keyring.h 00:05:31.590 TEST_HEADER include/spdk/keyring_module.h 00:05:31.590 TEST_HEADER include/spdk/likely.h 00:05:31.590 TEST_HEADER include/spdk/log.h 00:05:31.590 TEST_HEADER include/spdk/lvol.h 00:05:31.590 TEST_HEADER include/spdk/md5.h 00:05:31.590 TEST_HEADER include/spdk/memory.h 00:05:31.590 TEST_HEADER include/spdk/mmio.h 00:05:31.590 TEST_HEADER include/spdk/nbd.h 00:05:31.590 TEST_HEADER include/spdk/net.h 00:05:31.590 TEST_HEADER include/spdk/notify.h 00:05:31.590 TEST_HEADER include/spdk/nvme.h 00:05:31.590 TEST_HEADER include/spdk/nvme_intel.h 00:05:31.590 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:31.590 CC test/app/bdev_svc/bdev_svc.o 00:05:31.590 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:31.590 CC test/dma/test_dma/test_dma.o 00:05:31.590 TEST_HEADER include/spdk/nvme_spec.h 00:05:31.590 TEST_HEADER include/spdk/nvme_zns.h 00:05:31.590 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:31.590 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:31.590 TEST_HEADER include/spdk/nvmf.h 00:05:31.590 TEST_HEADER include/spdk/nvmf_spec.h 00:05:31.590 TEST_HEADER include/spdk/nvmf_transport.h 00:05:31.590 TEST_HEADER include/spdk/opal.h 00:05:31.590 TEST_HEADER include/spdk/opal_spec.h 00:05:31.590 TEST_HEADER include/spdk/pci_ids.h 00:05:31.590 TEST_HEADER include/spdk/pipe.h 00:05:31.590 TEST_HEADER include/spdk/queue.h 00:05:31.590 TEST_HEADER include/spdk/reduce.h 00:05:31.590 TEST_HEADER include/spdk/rpc.h 00:05:31.590 TEST_HEADER include/spdk/scheduler.h 00:05:31.590 TEST_HEADER include/spdk/scsi.h 00:05:31.590 TEST_HEADER include/spdk/scsi_spec.h 00:05:31.590 TEST_HEADER include/spdk/sock.h 00:05:31.590 TEST_HEADER include/spdk/stdinc.h 00:05:31.590 TEST_HEADER include/spdk/string.h 00:05:31.590 TEST_HEADER include/spdk/thread.h 00:05:31.590 TEST_HEADER include/spdk/trace.h 00:05:31.590 TEST_HEADER include/spdk/trace_parser.h 00:05:31.590 TEST_HEADER include/spdk/tree.h 00:05:31.590 TEST_HEADER include/spdk/ublk.h 00:05:31.590 CC test/env/mem_callbacks/mem_callbacks.o 00:05:31.590 TEST_HEADER include/spdk/util.h 00:05:31.590 TEST_HEADER include/spdk/uuid.h 00:05:31.590 LINK rpc_client_test 00:05:31.590 TEST_HEADER include/spdk/version.h 00:05:31.590 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:31.590 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:31.590 TEST_HEADER include/spdk/vhost.h 00:05:31.590 TEST_HEADER include/spdk/vmd.h 00:05:31.590 TEST_HEADER include/spdk/xor.h 00:05:31.590 TEST_HEADER include/spdk/zipf.h 00:05:31.590 CXX test/cpp_headers/accel.o 00:05:31.590 LINK interrupt_tgt 00:05:31.590 LINK poller_perf 00:05:31.590 LINK zipf 00:05:31.849 LINK bdev_svc 
00:05:31.849 LINK ioat_perf 00:05:31.849 CXX test/cpp_headers/accel_module.o 00:05:31.849 LINK spdk_trace 00:05:31.849 CXX test/cpp_headers/assert.o 00:05:31.849 LINK mem_callbacks 00:05:31.849 CXX test/cpp_headers/barrier.o 00:05:31.849 CXX test/cpp_headers/base64.o 00:05:31.849 CC app/trace_record/trace_record.o 00:05:32.108 CXX test/cpp_headers/bdev.o 00:05:32.108 CC examples/ioat/verify/verify.o 00:05:32.108 CC test/env/vtophys/vtophys.o 00:05:32.108 LINK test_dma 00:05:32.108 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:32.108 CC test/env/memory/memory_ut.o 00:05:32.108 CC test/app/histogram_perf/histogram_perf.o 00:05:32.108 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:32.108 LINK spdk_trace_record 00:05:32.367 LINK vtophys 00:05:32.367 CC test/event/event_perf/event_perf.o 00:05:32.367 CXX test/cpp_headers/bdev_module.o 00:05:32.367 LINK histogram_perf 00:05:32.367 LINK verify 00:05:32.367 CXX test/cpp_headers/bdev_zone.o 00:05:32.367 LINK env_dpdk_post_init 00:05:32.367 CXX test/cpp_headers/bit_array.o 00:05:32.367 LINK event_perf 00:05:32.665 CC app/nvmf_tgt/nvmf_main.o 00:05:32.665 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:32.665 CXX test/cpp_headers/bit_pool.o 00:05:32.665 LINK nvme_fuzz 00:05:32.665 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:32.665 CC test/env/pci/pci_ut.o 00:05:32.665 CC examples/thread/thread/thread_ex.o 00:05:32.665 CC examples/sock/hello_world/hello_sock.o 00:05:32.665 CC test/event/reactor/reactor.o 00:05:32.665 LINK nvmf_tgt 00:05:32.665 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:32.926 CXX test/cpp_headers/blob_bdev.o 00:05:32.926 CXX test/cpp_headers/blobfs_bdev.o 00:05:32.926 LINK reactor 00:05:32.926 LINK memory_ut 00:05:32.926 LINK thread 00:05:32.926 LINK hello_sock 00:05:32.926 CXX test/cpp_headers/blobfs.o 00:05:32.926 CXX test/cpp_headers/blob.o 00:05:32.926 LINK pci_ut 00:05:33.185 CC app/iscsi_tgt/iscsi_tgt.o 00:05:33.185 CXX test/cpp_headers/conf.o 00:05:33.185 CC test/event/reactor_perf/reactor_perf.o 00:05:33.185 CC app/spdk_tgt/spdk_tgt.o 00:05:33.185 LINK vhost_fuzz 00:05:33.185 CC test/event/app_repeat/app_repeat.o 00:05:33.185 CXX test/cpp_headers/config.o 00:05:33.185 LINK reactor_perf 00:05:33.185 LINK iscsi_tgt 00:05:33.443 CXX test/cpp_headers/cpuset.o 00:05:33.443 CC examples/vmd/lsvmd/lsvmd.o 00:05:33.443 CC test/event/scheduler/scheduler.o 00:05:33.443 LINK spdk_tgt 00:05:33.443 LINK app_repeat 00:05:33.443 LINK lsvmd 00:05:33.443 CXX test/cpp_headers/crc16.o 00:05:33.443 CXX test/cpp_headers/crc32.o 00:05:33.443 CC examples/vmd/led/led.o 00:05:33.443 CC test/accel/dif/dif.o 00:05:33.702 CC test/blobfs/mkfs/mkfs.o 00:05:33.702 CXX test/cpp_headers/crc64.o 00:05:33.702 LINK scheduler 00:05:33.702 CXX test/cpp_headers/dif.o 00:05:33.702 LINK led 00:05:33.702 CC app/spdk_lspci/spdk_lspci.o 00:05:33.702 LINK mkfs 00:05:33.702 CXX test/cpp_headers/dma.o 00:05:33.960 CXX test/cpp_headers/endian.o 00:05:33.960 LINK spdk_lspci 00:05:33.960 CC examples/idxd/perf/perf.o 00:05:33.960 CC app/spdk_nvme_perf/perf.o 00:05:33.960 CC test/lvol/esnap/esnap.o 00:05:33.960 CXX test/cpp_headers/env_dpdk.o 00:05:33.960 CXX test/cpp_headers/env.o 00:05:33.960 CXX test/cpp_headers/event.o 00:05:33.960 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:33.960 CC test/app/jsoncat/jsoncat.o 00:05:34.219 LINK iscsi_fuzz 00:05:34.219 LINK dif 00:05:34.219 CXX test/cpp_headers/fd_group.o 00:05:34.219 LINK jsoncat 00:05:34.219 LINK idxd_perf 00:05:34.219 CC test/app/stub/stub.o 00:05:34.478 LINK hello_fsdev 00:05:34.478 CXX 
test/cpp_headers/fd.o 00:05:34.478 CC examples/accel/perf/accel_perf.o 00:05:34.478 CXX test/cpp_headers/file.o 00:05:34.478 CXX test/cpp_headers/fsdev.o 00:05:34.478 LINK stub 00:05:34.478 CXX test/cpp_headers/fsdev_module.o 00:05:34.737 CC test/nvme/aer/aer.o 00:05:34.737 CC test/nvme/reset/reset.o 00:05:34.737 CC test/bdev/bdevio/bdevio.o 00:05:34.737 CC examples/nvme/hello_world/hello_world.o 00:05:34.737 CC test/nvme/sgl/sgl.o 00:05:34.737 CXX test/cpp_headers/ftl.o 00:05:34.737 CC examples/blob/hello_world/hello_blob.o 00:05:34.737 LINK spdk_nvme_perf 00:05:34.996 LINK accel_perf 00:05:34.996 LINK aer 00:05:34.996 LINK reset 00:05:34.996 LINK hello_world 00:05:34.996 CXX test/cpp_headers/gpt_spec.o 00:05:34.996 LINK sgl 00:05:34.996 CXX test/cpp_headers/hexlify.o 00:05:34.996 LINK hello_blob 00:05:34.996 CC app/spdk_nvme_identify/identify.o 00:05:35.255 LINK bdevio 00:05:35.255 CXX test/cpp_headers/histogram_data.o 00:05:35.255 CC test/nvme/e2edp/nvme_dp.o 00:05:35.255 CC examples/nvme/reconnect/reconnect.o 00:05:35.255 CC test/nvme/overhead/overhead.o 00:05:35.255 CC test/nvme/err_injection/err_injection.o 00:05:35.255 CC examples/bdev/hello_world/hello_bdev.o 00:05:35.513 CXX test/cpp_headers/idxd.o 00:05:35.513 CC examples/blob/cli/blobcli.o 00:05:35.513 CC app/spdk_nvme_discover/discovery_aer.o 00:05:35.513 LINK nvme_dp 00:05:35.513 LINK err_injection 00:05:35.513 CXX test/cpp_headers/idxd_spec.o 00:05:35.513 LINK hello_bdev 00:05:35.513 LINK overhead 00:05:35.772 LINK reconnect 00:05:35.772 LINK spdk_nvme_discover 00:05:35.772 CXX test/cpp_headers/init.o 00:05:35.772 CC app/spdk_top/spdk_top.o 00:05:35.772 CC test/nvme/startup/startup.o 00:05:36.031 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:36.031 LINK blobcli 00:05:36.031 LINK spdk_nvme_identify 00:05:36.031 CXX test/cpp_headers/ioat.o 00:05:36.031 CC examples/bdev/bdevperf/bdevperf.o 00:05:36.031 CC examples/nvme/arbitration/arbitration.o 00:05:36.031 CC app/vhost/vhost.o 00:05:36.031 LINK startup 00:05:36.031 CXX test/cpp_headers/ioat_spec.o 00:05:36.290 CC examples/nvme/hotplug/hotplug.o 00:05:36.290 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:36.290 LINK vhost 00:05:36.290 CC test/nvme/reserve/reserve.o 00:05:36.290 LINK arbitration 00:05:36.290 CXX test/cpp_headers/iscsi_spec.o 00:05:36.290 LINK cmb_copy 00:05:36.290 LINK nvme_manage 00:05:36.549 LINK hotplug 00:05:36.549 LINK reserve 00:05:36.549 CXX test/cpp_headers/json.o 00:05:36.549 CC app/spdk_dd/spdk_dd.o 00:05:36.549 CC examples/nvme/abort/abort.o 00:05:36.549 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:36.808 CC app/fio/nvme/fio_plugin.o 00:05:36.808 CXX test/cpp_headers/jsonrpc.o 00:05:36.808 LINK spdk_top 00:05:36.808 CC app/fio/bdev/fio_plugin.o 00:05:36.808 LINK bdevperf 00:05:36.808 CC test/nvme/simple_copy/simple_copy.o 00:05:36.808 LINK pmr_persistence 00:05:36.808 CXX test/cpp_headers/keyring.o 00:05:37.067 CC test/nvme/connect_stress/connect_stress.o 00:05:37.067 CXX test/cpp_headers/keyring_module.o 00:05:37.067 CXX test/cpp_headers/likely.o 00:05:37.067 LINK spdk_dd 00:05:37.067 LINK simple_copy 00:05:37.067 LINK abort 00:05:37.067 CC test/nvme/boot_partition/boot_partition.o 00:05:37.067 LINK connect_stress 00:05:37.067 CXX test/cpp_headers/log.o 00:05:37.067 CXX test/cpp_headers/lvol.o 00:05:37.326 CXX test/cpp_headers/md5.o 00:05:37.326 LINK spdk_nvme 00:05:37.326 LINK spdk_bdev 00:05:37.326 CC test/nvme/compliance/nvme_compliance.o 00:05:37.326 LINK boot_partition 00:05:37.326 CXX test/cpp_headers/memory.o 00:05:37.326 CC 
test/nvme/fused_ordering/fused_ordering.o 00:05:37.326 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:37.585 CC examples/nvmf/nvmf/nvmf.o 00:05:37.585 CC test/nvme/fdp/fdp.o 00:05:37.585 CXX test/cpp_headers/mmio.o 00:05:37.585 CC test/nvme/cuse/cuse.o 00:05:37.585 CXX test/cpp_headers/nbd.o 00:05:37.585 CXX test/cpp_headers/net.o 00:05:37.585 CXX test/cpp_headers/notify.o 00:05:37.585 LINK nvme_compliance 00:05:37.585 LINK fused_ordering 00:05:37.585 CXX test/cpp_headers/nvme.o 00:05:37.585 LINK doorbell_aers 00:05:37.585 CXX test/cpp_headers/nvme_intel.o 00:05:37.843 CXX test/cpp_headers/nvme_ocssd.o 00:05:37.843 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:37.843 LINK nvmf 00:05:37.843 CXX test/cpp_headers/nvme_spec.o 00:05:37.843 CXX test/cpp_headers/nvme_zns.o 00:05:37.843 LINK fdp 00:05:37.843 CXX test/cpp_headers/nvmf_cmd.o 00:05:37.843 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:37.843 CXX test/cpp_headers/nvmf.o 00:05:37.843 CXX test/cpp_headers/nvmf_spec.o 00:05:37.844 CXX test/cpp_headers/nvmf_transport.o 00:05:38.102 CXX test/cpp_headers/opal.o 00:05:38.102 CXX test/cpp_headers/opal_spec.o 00:05:38.102 CXX test/cpp_headers/pci_ids.o 00:05:38.102 CXX test/cpp_headers/pipe.o 00:05:38.102 CXX test/cpp_headers/queue.o 00:05:38.102 CXX test/cpp_headers/reduce.o 00:05:38.102 CXX test/cpp_headers/rpc.o 00:05:38.103 CXX test/cpp_headers/scheduler.o 00:05:38.103 CXX test/cpp_headers/scsi.o 00:05:38.103 CXX test/cpp_headers/scsi_spec.o 00:05:38.103 CXX test/cpp_headers/sock.o 00:05:38.103 CXX test/cpp_headers/stdinc.o 00:05:38.103 CXX test/cpp_headers/string.o 00:05:38.361 CXX test/cpp_headers/thread.o 00:05:38.362 CXX test/cpp_headers/trace.o 00:05:38.362 CXX test/cpp_headers/trace_parser.o 00:05:38.362 CXX test/cpp_headers/tree.o 00:05:38.362 CXX test/cpp_headers/ublk.o 00:05:38.362 CXX test/cpp_headers/util.o 00:05:38.362 CXX test/cpp_headers/uuid.o 00:05:38.362 CXX test/cpp_headers/version.o 00:05:38.362 CXX test/cpp_headers/vfio_user_pci.o 00:05:38.362 CXX test/cpp_headers/vfio_user_spec.o 00:05:38.362 CXX test/cpp_headers/vhost.o 00:05:38.620 CXX test/cpp_headers/vmd.o 00:05:38.620 CXX test/cpp_headers/xor.o 00:05:38.620 CXX test/cpp_headers/zipf.o 00:05:38.882 LINK cuse 00:05:39.141 LINK esnap 00:05:39.399 ************************************ 00:05:39.399 END TEST make 00:05:39.399 ************************************ 00:05:39.399 00:05:39.399 real 1m25.493s 00:05:39.399 user 6m55.788s 00:05:39.399 sys 1m10.633s 00:05:39.399 14:21:31 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:39.399 14:21:31 make -- common/autotest_common.sh@10 -- $ set +x 00:05:39.399 14:21:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:39.399 14:21:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:39.399 14:21:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:39.399 14:21:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:39.399 14:21:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:39.399 14:21:31 -- pm/common@44 -- $ pid=6040 00:05:39.399 14:21:31 -- pm/common@50 -- $ kill -TERM 6040 00:05:39.400 14:21:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:39.400 14:21:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:39.400 14:21:31 -- pm/common@44 -- $ pid=6042 00:05:39.400 14:21:31 -- pm/common@50 -- $ kill -TERM 6042 00:05:39.400 14:21:31 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || 
SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:39.400 14:21:31 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:39.400 14:21:31 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:39.400 14:21:31 -- common/autotest_common.sh@1711 -- # lcov --version 00:05:39.400 14:21:31 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:39.659 14:21:31 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:39.659 14:21:31 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.659 14:21:31 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.659 14:21:31 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.659 14:21:31 -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.659 14:21:31 -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.659 14:21:31 -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.659 14:21:31 -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.659 14:21:31 -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.659 14:21:31 -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.659 14:21:31 -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.659 14:21:31 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.659 14:21:31 -- scripts/common.sh@344 -- # case "$op" in 00:05:39.659 14:21:31 -- scripts/common.sh@345 -- # : 1 00:05:39.659 14:21:31 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.659 14:21:31 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.659 14:21:31 -- scripts/common.sh@365 -- # decimal 1 00:05:39.659 14:21:31 -- scripts/common.sh@353 -- # local d=1 00:05:39.659 14:21:31 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.659 14:21:31 -- scripts/common.sh@355 -- # echo 1 00:05:39.659 14:21:31 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.659 14:21:31 -- scripts/common.sh@366 -- # decimal 2 00:05:39.659 14:21:31 -- scripts/common.sh@353 -- # local d=2 00:05:39.659 14:21:31 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.659 14:21:31 -- scripts/common.sh@355 -- # echo 2 00:05:39.659 14:21:31 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.659 14:21:31 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.659 14:21:31 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.659 14:21:31 -- scripts/common.sh@368 -- # return 0 00:05:39.659 14:21:31 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.659 14:21:31 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:39.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.659 --rc genhtml_branch_coverage=1 00:05:39.659 --rc genhtml_function_coverage=1 00:05:39.659 --rc genhtml_legend=1 00:05:39.659 --rc geninfo_all_blocks=1 00:05:39.659 --rc geninfo_unexecuted_blocks=1 00:05:39.659 00:05:39.659 ' 00:05:39.659 14:21:31 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:39.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.659 --rc genhtml_branch_coverage=1 00:05:39.659 --rc genhtml_function_coverage=1 00:05:39.659 --rc genhtml_legend=1 00:05:39.659 --rc geninfo_all_blocks=1 00:05:39.659 --rc geninfo_unexecuted_blocks=1 00:05:39.659 00:05:39.659 ' 00:05:39.659 14:21:31 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:39.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.659 --rc genhtml_branch_coverage=1 00:05:39.659 --rc genhtml_function_coverage=1 00:05:39.660 --rc genhtml_legend=1 00:05:39.660 --rc geninfo_all_blocks=1 
00:05:39.660 --rc geninfo_unexecuted_blocks=1 00:05:39.660 00:05:39.660 ' 00:05:39.660 14:21:31 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:39.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.660 --rc genhtml_branch_coverage=1 00:05:39.660 --rc genhtml_function_coverage=1 00:05:39.660 --rc genhtml_legend=1 00:05:39.660 --rc geninfo_all_blocks=1 00:05:39.660 --rc geninfo_unexecuted_blocks=1 00:05:39.660 00:05:39.660 ' 00:05:39.660 14:21:31 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:39.660 14:21:31 -- nvmf/common.sh@7 -- # uname -s 00:05:39.660 14:21:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.660 14:21:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.660 14:21:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.660 14:21:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.660 14:21:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.660 14:21:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.660 14:21:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.660 14:21:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.660 14:21:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.660 14:21:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.660 14:21:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:05:39.660 14:21:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:05:39.660 14:21:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:39.660 14:21:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.660 14:21:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:39.660 14:21:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:39.660 14:21:31 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:39.660 14:21:31 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:39.660 14:21:31 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.660 14:21:31 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.660 14:21:31 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.660 14:21:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.660 14:21:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.660 14:21:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.660 14:21:31 -- paths/export.sh@5 -- # export PATH 00:05:39.660 14:21:31 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.660 14:21:31 -- nvmf/common.sh@51 -- # : 0 00:05:39.660 14:21:31 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:39.660 14:21:31 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:39.660 14:21:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:39.660 14:21:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.660 14:21:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.660 14:21:31 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:39.660 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:39.660 14:21:31 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:39.660 14:21:31 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:39.660 14:21:31 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:39.660 14:21:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:39.660 14:21:31 -- spdk/autotest.sh@32 -- # uname -s 00:05:39.660 14:21:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:39.660 14:21:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:39.660 14:21:31 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:39.660 14:21:31 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:39.660 14:21:31 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:39.660 14:21:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:39.660 14:21:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:39.660 14:21:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:39.660 14:21:31 -- spdk/autotest.sh@48 -- # udevadm_pid=68424 00:05:39.660 14:21:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:39.660 14:21:31 -- pm/common@17 -- # local monitor 00:05:39.660 14:21:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:39.660 14:21:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:39.660 14:21:31 -- pm/common@25 -- # sleep 1 00:05:39.660 14:21:31 -- pm/common@21 -- # date +%s 00:05:39.660 14:21:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:39.660 14:21:31 -- pm/common@21 -- # date +%s 00:05:39.660 14:21:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734358891 00:05:39.660 14:21:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734358891 00:05:39.660 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734358891_collect-vmstat.pm.log 00:05:39.660 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734358891_collect-cpu-load.pm.log 00:05:40.596 14:21:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:40.596 14:21:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:40.596 14:21:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:40.596 14:21:32 -- common/autotest_common.sh@10 -- # set +x 00:05:40.596 14:21:32 -- spdk/autotest.sh@59 -- # create_test_list 
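The collect-cpu-load and collect-vmstat invocations above start the per-run resource monitors whose pid files the stop_monitor_resources teardown earlier in this log signals with kill -TERM. A condensed sketch of that start/stop pattern, using illustrative variable names and caller-side pid files (the real helpers live under scripts/perf/pm and pm/common and may manage their pid files themselves):

    # sketch only: paths and the -d/-l/-p flags mirror the log; error handling omitted
    power_dir=$output_dir/power
    suffix=monitor.autotest.sh.$(date +%s)
    for tool in collect-cpu-load collect-vmstat; do
        "$rootdir/scripts/perf/pm/$tool" -d "$power_dir" -l -p "$suffix" &
        echo $! > "$power_dir/$tool.pid"          # remember the collector pid
    done
    # teardown, matching the kill -TERM calls seen near the top of this log
    for tool in collect-cpu-load collect-vmstat; do
        pidfile=$power_dir/$tool.pid
        [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
    done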
00:05:40.596 14:21:32 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:40.596 14:21:32 -- common/autotest_common.sh@10 -- # set +x 00:05:40.855 14:21:32 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:40.855 14:21:32 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:40.855 14:21:32 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:40.855 14:21:32 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:40.855 14:21:32 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:40.855 14:21:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:40.855 14:21:32 -- common/autotest_common.sh@1457 -- # uname 00:05:40.855 14:21:32 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:40.855 14:21:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:40.855 14:21:32 -- common/autotest_common.sh@1477 -- # uname 00:05:40.855 14:21:32 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:40.855 14:21:32 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:40.855 14:21:32 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:40.855 lcov: LCOV version 1.15 00:05:40.855 14:21:32 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:55.747 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:55.748 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:10.629 14:22:01 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:10.629 14:22:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.629 14:22:01 -- common/autotest_common.sh@10 -- # set +x 00:06:10.629 14:22:01 -- spdk/autotest.sh@78 -- # rm -f 00:06:10.629 14:22:01 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:10.629 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:10.629 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:10.629 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:10.629 14:22:02 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:10.629 14:22:02 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:10.629 14:22:02 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:10.629 14:22:02 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:10.629 14:22:02 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:10.629 14:22:02 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:10.629 14:22:02 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:10.629 14:22:02 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:10.629 14:22:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:10.629 14:22:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:10.629 14:22:02 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:10.629 14:22:02 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:10.629 14:22:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:10.629 14:22:02 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:10.629 14:22:02 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:10.629 14:22:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:10.629 14:22:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:10.629 14:22:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:10.629 14:22:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:10.629 14:22:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:10.629 14:22:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:10.629 14:22:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:06:10.629 14:22:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:10.629 14:22:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:10.629 14:22:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:10.629 14:22:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:10.629 14:22:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:06:10.629 14:22:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:10.629 14:22:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:10.629 14:22:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:10.629 14:22:02 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:10.629 14:22:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:10.629 14:22:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:10.629 14:22:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:10.629 14:22:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:10.629 14:22:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:10.629 No valid GPT data, bailing 00:06:10.629 14:22:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:10.629 14:22:02 -- scripts/common.sh@394 -- # pt= 00:06:10.629 14:22:02 -- scripts/common.sh@395 -- # return 1 00:06:10.629 14:22:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:10.629 1+0 records in 00:06:10.629 1+0 records out 00:06:10.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00332707 s, 315 MB/s 00:06:10.629 14:22:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:10.629 14:22:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:10.629 14:22:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:10.629 14:22:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:10.629 14:22:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:10.629 No valid GPT data, bailing 00:06:10.629 14:22:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:10.629 14:22:02 -- scripts/common.sh@394 -- # pt= 00:06:10.629 14:22:02 -- scripts/common.sh@395 -- # return 1 00:06:10.629 14:22:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:10.629 1+0 records in 00:06:10.629 1+0 records out 00:06:10.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416909 s, 252 MB/s 00:06:10.629 14:22:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 
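The wipe loop traced above skips zoned namespaces, treats a namespace as free when blkid reports no partition-table type, and only then zeroes its first 1 MiB. A simplified sketch of that flow (the real checks also run scripts/spdk-gpt.py and live in scripts/common.sh and autotest.sh):

    shopt -s extglob                            # required for the /dev/nvme*n!(*p*) glob
    for dev in /dev/nvme*n!(*p*); do
        zoned=/sys/block/${dev##*/}/queue/zoned
        # zoned namespaces are left alone, mirroring the is_block_zoned check
        [[ -e $zoned && $(<"$zoned") != none ]] && continue
        # "No valid GPT data, bailing" above corresponds to an empty PTTYPE here
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        [[ -n $pt ]] && continue
        dd if=/dev/zero of="$dev" bs=1M count=1  # same 1 MiB wipe as the dd runs in the log
    done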
00:06:10.629 14:22:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:10.629 14:22:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:10.629 14:22:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:10.629 14:22:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:10.629 No valid GPT data, bailing 00:06:10.629 14:22:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:10.629 14:22:02 -- scripts/common.sh@394 -- # pt= 00:06:10.629 14:22:02 -- scripts/common.sh@395 -- # return 1 00:06:10.629 14:22:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:10.629 1+0 records in 00:06:10.629 1+0 records out 00:06:10.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00428471 s, 245 MB/s 00:06:10.629 14:22:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:10.629 14:22:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:10.629 14:22:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:10.629 14:22:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:10.629 14:22:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:10.888 No valid GPT data, bailing 00:06:10.888 14:22:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:10.888 14:22:02 -- scripts/common.sh@394 -- # pt= 00:06:10.888 14:22:02 -- scripts/common.sh@395 -- # return 1 00:06:10.888 14:22:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:10.888 1+0 records in 00:06:10.888 1+0 records out 00:06:10.888 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00421441 s, 249 MB/s 00:06:10.888 14:22:02 -- spdk/autotest.sh@105 -- # sync 00:06:11.147 14:22:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:11.147 14:22:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:11.147 14:22:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:13.061 14:22:05 -- spdk/autotest.sh@111 -- # uname -s 00:06:13.061 14:22:05 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:13.061 14:22:05 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:13.061 14:22:05 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:13.628 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:13.628 Hugepages 00:06:13.628 node hugesize free / total 00:06:13.628 node0 1048576kB 0 / 0 00:06:13.628 node0 2048kB 0 / 0 00:06:13.628 00:06:13.628 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:13.924 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:13.924 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:13.924 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:13.924 14:22:05 -- spdk/autotest.sh@117 -- # uname -s 00:06:13.924 14:22:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:13.924 14:22:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:13.924 14:22:05 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:14.495 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:14.753 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:14.753 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:14.753 14:22:06 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:15.688 14:22:07 -- 
common/autotest_common.sh@1518 -- # bdfs=() 00:06:15.688 14:22:07 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:15.688 14:22:07 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:15.689 14:22:07 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:15.689 14:22:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:15.689 14:22:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:15.689 14:22:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:15.689 14:22:07 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:15.689 14:22:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:15.947 14:22:07 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:15.947 14:22:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:15.947 14:22:07 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:16.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:16.206 Waiting for block devices as requested 00:06:16.206 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:16.465 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:16.465 14:22:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:16.465 14:22:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:16.465 14:22:08 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:16.465 14:22:08 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:16.465 14:22:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:16.465 14:22:08 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:16.465 14:22:08 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:16.465 14:22:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:16.465 14:22:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:16.465 14:22:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:16.465 14:22:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:16.465 14:22:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:16.465 14:22:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:16.465 14:22:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:16.465 14:22:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:16.465 14:22:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:16.465 14:22:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:16.465 14:22:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:16.465 14:22:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:16.465 14:22:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:16.465 14:22:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:16.465 14:22:08 -- common/autotest_common.sh@1543 -- # continue 00:06:16.465 14:22:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:16.466 14:22:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:16.466 14:22:08 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:16.466 14:22:08 -- 
common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:16.466 14:22:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:16.466 14:22:08 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:16.466 14:22:08 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:16.466 14:22:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:16.466 14:22:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:16.466 14:22:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:16.466 14:22:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:16.466 14:22:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:16.466 14:22:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:16.466 14:22:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:16.466 14:22:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:16.466 14:22:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:16.466 14:22:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:16.466 14:22:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:16.466 14:22:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:16.466 14:22:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:16.466 14:22:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:16.466 14:22:08 -- common/autotest_common.sh@1543 -- # continue 00:06:16.466 14:22:08 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:16.466 14:22:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:16.466 14:22:08 -- common/autotest_common.sh@10 -- # set +x 00:06:16.466 14:22:08 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:16.466 14:22:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.466 14:22:08 -- common/autotest_common.sh@10 -- # set +x 00:06:16.466 14:22:08 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:17.403 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:17.403 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:17.403 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:17.403 14:22:09 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:17.403 14:22:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.403 14:22:09 -- common/autotest_common.sh@10 -- # set +x 00:06:17.403 14:22:09 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:17.403 14:22:09 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:17.403 14:22:09 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:17.403 14:22:09 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:17.403 14:22:09 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:17.403 14:22:09 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:17.403 14:22:09 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:17.403 14:22:09 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:17.403 14:22:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:17.403 14:22:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:17.403 14:22:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:17.403 14:22:09 -- common/autotest_common.sh@1499 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:17.403 14:22:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:17.403 14:22:09 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:17.403 14:22:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:17.403 14:22:09 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:17.403 14:22:09 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:17.403 14:22:09 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:17.403 14:22:09 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:17.403 14:22:09 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:17.403 14:22:09 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:17.403 14:22:09 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:17.403 14:22:09 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:17.403 14:22:09 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:17.403 14:22:09 -- common/autotest_common.sh@1572 -- # return 0 00:06:17.403 14:22:09 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:17.403 14:22:09 -- common/autotest_common.sh@1580 -- # return 0 00:06:17.403 14:22:09 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:17.403 14:22:09 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:17.403 14:22:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:17.403 14:22:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:17.403 14:22:09 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:17.403 14:22:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.403 14:22:09 -- common/autotest_common.sh@10 -- # set +x 00:06:17.403 14:22:09 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:17.403 14:22:09 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:17.403 14:22:09 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:17.403 14:22:09 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:17.403 14:22:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.403 14:22:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.403 14:22:09 -- common/autotest_common.sh@10 -- # set +x 00:06:17.662 ************************************ 00:06:17.662 START TEST env 00:06:17.662 ************************************ 00:06:17.662 14:22:09 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:17.662 * Looking for test storage... 
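The opal_revert_cleanup trace above builds its device list from gen_nvme.sh and keeps only controllers whose PCI device ID is 0x0a54; both controllers in this run report 0x0010, so nothing is reverted. A hedged sketch of that filter (array handling simplified, names illustrative):

    # illustrative reconstruction of the bdf filter traced above
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    opal_bdfs=()
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == 0x0a54 ]] && opal_bdfs+=("$bdf")   # only 0x0a54 controllers qualify
    done
    if (( ${#opal_bdfs[@]} == 0 )); then
        echo "no 0x0a54 controllers found; skipping opal revert"   # the case hit in this run
    fi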
00:06:17.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:17.662 14:22:09 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:17.662 14:22:09 env -- common/autotest_common.sh@1711 -- # lcov --version 00:06:17.662 14:22:09 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:17.662 14:22:09 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:17.662 14:22:09 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.662 14:22:09 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.662 14:22:09 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.662 14:22:09 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.662 14:22:09 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.662 14:22:09 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.662 14:22:09 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.662 14:22:09 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.662 14:22:09 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.662 14:22:09 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.662 14:22:09 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.662 14:22:09 env -- scripts/common.sh@344 -- # case "$op" in 00:06:17.662 14:22:09 env -- scripts/common.sh@345 -- # : 1 00:06:17.662 14:22:09 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.662 14:22:09 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.662 14:22:09 env -- scripts/common.sh@365 -- # decimal 1 00:06:17.662 14:22:09 env -- scripts/common.sh@353 -- # local d=1 00:06:17.662 14:22:09 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.662 14:22:09 env -- scripts/common.sh@355 -- # echo 1 00:06:17.662 14:22:09 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.662 14:22:09 env -- scripts/common.sh@366 -- # decimal 2 00:06:17.662 14:22:09 env -- scripts/common.sh@353 -- # local d=2 00:06:17.662 14:22:09 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.662 14:22:09 env -- scripts/common.sh@355 -- # echo 2 00:06:17.662 14:22:09 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.662 14:22:09 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.662 14:22:09 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.662 14:22:09 env -- scripts/common.sh@368 -- # return 0 00:06:17.662 14:22:09 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.662 14:22:09 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:17.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.662 --rc genhtml_branch_coverage=1 00:06:17.662 --rc genhtml_function_coverage=1 00:06:17.662 --rc genhtml_legend=1 00:06:17.662 --rc geninfo_all_blocks=1 00:06:17.662 --rc geninfo_unexecuted_blocks=1 00:06:17.662 00:06:17.662 ' 00:06:17.662 14:22:09 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:17.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.662 --rc genhtml_branch_coverage=1 00:06:17.662 --rc genhtml_function_coverage=1 00:06:17.662 --rc genhtml_legend=1 00:06:17.662 --rc geninfo_all_blocks=1 00:06:17.662 --rc geninfo_unexecuted_blocks=1 00:06:17.662 00:06:17.662 ' 00:06:17.662 14:22:09 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:17.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.662 --rc genhtml_branch_coverage=1 00:06:17.662 --rc genhtml_function_coverage=1 00:06:17.662 --rc 
genhtml_legend=1 00:06:17.662 --rc geninfo_all_blocks=1 00:06:17.662 --rc geninfo_unexecuted_blocks=1 00:06:17.662 00:06:17.662 ' 00:06:17.662 14:22:09 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:17.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.662 --rc genhtml_branch_coverage=1 00:06:17.662 --rc genhtml_function_coverage=1 00:06:17.662 --rc genhtml_legend=1 00:06:17.662 --rc geninfo_all_blocks=1 00:06:17.662 --rc geninfo_unexecuted_blocks=1 00:06:17.662 00:06:17.662 ' 00:06:17.662 14:22:09 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:17.662 14:22:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.662 14:22:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.662 14:22:09 env -- common/autotest_common.sh@10 -- # set +x 00:06:17.662 ************************************ 00:06:17.662 START TEST env_memory 00:06:17.662 ************************************ 00:06:17.662 14:22:09 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:17.662 00:06:17.662 00:06:17.662 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.662 http://cunit.sourceforge.net/ 00:06:17.662 00:06:17.662 00:06:17.662 Suite: memory 00:06:17.662 Test: alloc and free memory map ...[2024-12-16 14:22:09.841426] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:17.662 passed 00:06:17.921 Test: mem map translation ...[2024-12-16 14:22:09.872647] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:17.921 [2024-12-16 14:22:09.872878] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:17.921 [2024-12-16 14:22:09.873184] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:17.921 [2024-12-16 14:22:09.873400] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:17.921 passed 00:06:17.921 Test: mem map registration ...[2024-12-16 14:22:09.937972] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:17.921 [2024-12-16 14:22:09.938214] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:17.921 passed 00:06:17.921 Test: mem map adjacent registrations ...passed 00:06:17.921 00:06:17.921 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.921 suites 1 1 n/a 0 0 00:06:17.921 tests 4 4 4 0 0 00:06:17.921 asserts 152 152 152 0 n/a 00:06:17.922 00:06:17.922 Elapsed time = 0.215 seconds 00:06:17.922 00:06:17.922 ************************************ 00:06:17.922 END TEST env_memory 00:06:17.922 ************************************ 00:06:17.922 real 0m0.232s 00:06:17.922 user 0m0.210s 00:06:17.922 sys 0m0.015s 00:06:17.922 14:22:10 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.922 14:22:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:17.922 14:22:10 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:17.922 14:22:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.922 14:22:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.922 14:22:10 env -- common/autotest_common.sh@10 -- # set +x 00:06:17.922 ************************************ 00:06:17.922 START TEST env_vtophys 00:06:17.922 ************************************ 00:06:17.922 14:22:10 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:17.922 EAL: lib.eal log level changed from notice to debug 00:06:17.922 EAL: Detected lcore 0 as core 0 on socket 0 00:06:17.922 EAL: Detected lcore 1 as core 0 on socket 0 00:06:17.922 EAL: Detected lcore 2 as core 0 on socket 0 00:06:17.922 EAL: Detected lcore 3 as core 0 on socket 0 00:06:17.922 EAL: Detected lcore 4 as core 0 on socket 0 00:06:17.922 EAL: Detected lcore 5 as core 0 on socket 0 00:06:17.922 EAL: Detected lcore 6 as core 0 on socket 0 00:06:17.922 EAL: Detected lcore 7 as core 0 on socket 0 00:06:17.922 EAL: Detected lcore 8 as core 0 on socket 0 00:06:17.922 EAL: Detected lcore 9 as core 0 on socket 0 00:06:17.922 EAL: Maximum logical cores by configuration: 128 00:06:17.922 EAL: Detected CPU lcores: 10 00:06:17.922 EAL: Detected NUMA nodes: 1 00:06:17.922 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:17.922 EAL: Detected shared linkage of DPDK 00:06:17.922 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:17.922 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:17.922 EAL: Registered [vdev] bus. 00:06:17.922 EAL: bus.vdev log level changed from disabled to notice 00:06:17.922 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:17.922 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:17.922 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:17.922 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:17.922 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:17.922 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:17.922 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:17.922 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:17.922 EAL: No shared files mode enabled, IPC will be disabled 00:06:17.922 EAL: No shared files mode enabled, IPC is disabled 00:06:17.922 EAL: Selected IOVA mode 'PA' 00:06:17.922 EAL: Probing VFIO support... 00:06:17.922 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:17.922 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:17.922 EAL: Ask a virtual area of 0x2e000 bytes 00:06:17.922 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:17.922 EAL: Setting up physically contiguous memory... 
00:06:17.922 EAL: Setting maximum number of open files to 524288 00:06:17.922 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:17.922 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:17.922 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.922 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:17.922 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:17.922 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.922 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:17.922 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:17.922 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.922 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:17.922 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:17.922 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.922 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:17.922 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:17.922 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.922 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:17.922 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.181 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.181 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:18.181 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:18.181 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.181 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:18.181 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.181 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.181 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:18.181 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:18.181 EAL: Hugepages will be freed exactly as allocated. 00:06:18.181 EAL: No shared files mode enabled, IPC is disabled 00:06:18.181 EAL: No shared files mode enabled, IPC is disabled 00:06:18.181 EAL: TSC frequency is ~2200000 KHz 00:06:18.181 EAL: Main lcore 0 is ready (tid=7f917ce4ca00;cpuset=[0]) 00:06:18.181 EAL: Trying to obtain current memory policy. 00:06:18.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.181 EAL: Restoring previous memory policy: 0 00:06:18.181 EAL: request: mp_malloc_sync 00:06:18.181 EAL: No shared files mode enabled, IPC is disabled 00:06:18.181 EAL: Heap on socket 0 was expanded by 2MB 00:06:18.181 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:18.181 EAL: No shared files mode enabled, IPC is disabled 00:06:18.181 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:18.181 EAL: Mem event callback 'spdk:(nil)' registered 00:06:18.181 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:18.181 00:06:18.181 00:06:18.181 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.181 http://cunit.sourceforge.net/ 00:06:18.181 00:06:18.181 00:06:18.181 Suite: components_suite 00:06:18.181 Test: vtophys_malloc_test ...passed 00:06:18.181 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:06:18.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.181 EAL: Restoring previous memory policy: 4 00:06:18.181 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.181 EAL: request: mp_malloc_sync 00:06:18.181 EAL: No shared files mode enabled, IPC is disabled 00:06:18.181 EAL: Heap on socket 0 was expanded by 4MB 00:06:18.181 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.181 EAL: request: mp_malloc_sync 00:06:18.181 EAL: No shared files mode enabled, IPC is disabled 00:06:18.181 EAL: Heap on socket 0 was shrunk by 4MB 00:06:18.181 EAL: Trying to obtain current memory policy. 00:06:18.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.181 EAL: Restoring previous memory policy: 4 00:06:18.181 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.181 EAL: request: mp_malloc_sync 00:06:18.181 EAL: No shared files mode enabled, IPC is disabled 00:06:18.181 EAL: Heap on socket 0 was expanded by 6MB 00:06:18.181 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.181 EAL: request: mp_malloc_sync 00:06:18.181 EAL: No shared files mode enabled, IPC is disabled 00:06:18.181 EAL: Heap on socket 0 was shrunk by 6MB 00:06:18.181 EAL: Trying to obtain current memory policy. 00:06:18.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.181 EAL: Restoring previous memory policy: 4 00:06:18.181 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.181 EAL: request: mp_malloc_sync 00:06:18.181 EAL: No shared files mode enabled, IPC is disabled 00:06:18.181 EAL: Heap on socket 0 was expanded by 10MB 00:06:18.181 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.181 EAL: request: mp_malloc_sync 00:06:18.181 EAL: No shared files mode enabled, IPC is disabled 00:06:18.181 EAL: Heap on socket 0 was shrunk by 10MB 00:06:18.181 EAL: Trying to obtain current memory policy. 00:06:18.181 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.182 EAL: Restoring previous memory policy: 4 00:06:18.182 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.182 EAL: request: mp_malloc_sync 00:06:18.182 EAL: No shared files mode enabled, IPC is disabled 00:06:18.182 EAL: Heap on socket 0 was expanded by 18MB 00:06:18.182 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.182 EAL: request: mp_malloc_sync 00:06:18.182 EAL: No shared files mode enabled, IPC is disabled 00:06:18.182 EAL: Heap on socket 0 was shrunk by 18MB 00:06:18.182 EAL: Trying to obtain current memory policy. 00:06:18.182 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.182 EAL: Restoring previous memory policy: 4 00:06:18.182 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.182 EAL: request: mp_malloc_sync 00:06:18.182 EAL: No shared files mode enabled, IPC is disabled 00:06:18.182 EAL: Heap on socket 0 was expanded by 34MB 00:06:18.182 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.182 EAL: request: mp_malloc_sync 00:06:18.182 EAL: No shared files mode enabled, IPC is disabled 00:06:18.182 EAL: Heap on socket 0 was shrunk by 34MB 00:06:18.182 EAL: Trying to obtain current memory policy. 
00:06:18.182 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.182 EAL: Restoring previous memory policy: 4 00:06:18.182 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.182 EAL: request: mp_malloc_sync 00:06:18.182 EAL: No shared files mode enabled, IPC is disabled 00:06:18.182 EAL: Heap on socket 0 was expanded by 66MB 00:06:18.182 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.182 EAL: request: mp_malloc_sync 00:06:18.182 EAL: No shared files mode enabled, IPC is disabled 00:06:18.182 EAL: Heap on socket 0 was shrunk by 66MB 00:06:18.182 EAL: Trying to obtain current memory policy. 00:06:18.182 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.182 EAL: Restoring previous memory policy: 4 00:06:18.182 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.182 EAL: request: mp_malloc_sync 00:06:18.182 EAL: No shared files mode enabled, IPC is disabled 00:06:18.182 EAL: Heap on socket 0 was expanded by 130MB 00:06:18.182 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.182 EAL: request: mp_malloc_sync 00:06:18.182 EAL: No shared files mode enabled, IPC is disabled 00:06:18.182 EAL: Heap on socket 0 was shrunk by 130MB 00:06:18.182 EAL: Trying to obtain current memory policy. 00:06:18.182 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.182 EAL: Restoring previous memory policy: 4 00:06:18.182 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.182 EAL: request: mp_malloc_sync 00:06:18.182 EAL: No shared files mode enabled, IPC is disabled 00:06:18.182 EAL: Heap on socket 0 was expanded by 258MB 00:06:18.441 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.441 EAL: request: mp_malloc_sync 00:06:18.441 EAL: No shared files mode enabled, IPC is disabled 00:06:18.441 EAL: Heap on socket 0 was shrunk by 258MB 00:06:18.441 EAL: Trying to obtain current memory policy. 00:06:18.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.441 EAL: Restoring previous memory policy: 4 00:06:18.441 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.441 EAL: request: mp_malloc_sync 00:06:18.441 EAL: No shared files mode enabled, IPC is disabled 00:06:18.441 EAL: Heap on socket 0 was expanded by 514MB 00:06:18.441 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.441 EAL: request: mp_malloc_sync 00:06:18.441 EAL: No shared files mode enabled, IPC is disabled 00:06:18.441 EAL: Heap on socket 0 was shrunk by 514MB 00:06:18.441 EAL: Trying to obtain current memory policy. 
00:06:18.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.700 EAL: Restoring previous memory policy: 4 00:06:18.700 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.700 EAL: request: mp_malloc_sync 00:06:18.700 EAL: No shared files mode enabled, IPC is disabled 00:06:18.700 EAL: Heap on socket 0 was expanded by 1026MB 00:06:18.700 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.968 passed 00:06:18.968 00:06:18.968 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.968 suites 1 1 n/a 0 0 00:06:18.968 tests 2 2 2 0 0 00:06:18.968 asserts 5470 5470 5470 0 n/a 00:06:18.968 00:06:18.968 Elapsed time = 0.662 seconds 00:06:18.968 EAL: request: mp_malloc_sync 00:06:18.968 EAL: No shared files mode enabled, IPC is disabled 00:06:18.968 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:18.968 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.968 EAL: request: mp_malloc_sync 00:06:18.968 EAL: No shared files mode enabled, IPC is disabled 00:06:18.969 EAL: Heap on socket 0 was shrunk by 2MB 00:06:18.969 EAL: No shared files mode enabled, IPC is disabled 00:06:18.969 EAL: No shared files mode enabled, IPC is disabled 00:06:18.969 EAL: No shared files mode enabled, IPC is disabled 00:06:18.969 00:06:18.969 real 0m0.869s 00:06:18.969 user 0m0.436s 00:06:18.969 sys 0m0.303s 00:06:18.969 14:22:10 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.969 14:22:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:18.969 ************************************ 00:06:18.969 END TEST env_vtophys 00:06:18.969 ************************************ 00:06:18.969 14:22:10 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:18.969 14:22:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.969 14:22:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.969 14:22:10 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.969 ************************************ 00:06:18.969 START TEST env_pci 00:06:18.969 ************************************ 00:06:18.969 14:22:10 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:18.969 00:06:18.969 00:06:18.969 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.969 http://cunit.sourceforge.net/ 00:06:18.969 00:06:18.969 00:06:18.969 Suite: pci 00:06:18.969 Test: pci_hook ...[2024-12-16 14:22:11.007329] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70643 has claimed it 00:06:18.969 passed 00:06:18.969 00:06:18.969 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.969 suites 1 1 n/a 0 0 00:06:18.969 tests 1 1 1 0 0 00:06:18.969 asserts 25 25 25 0 n/a 00:06:18.969 00:06:18.969 Elapsed time = 0.002 seconds 00:06:18.969 EAL: Cannot find device (10000:00:01.0) 00:06:18.969 EAL: Failed to attach device on primary process 00:06:18.969 00:06:18.969 real 0m0.017s 00:06:18.969 user 0m0.006s 00:06:18.969 sys 0m0.010s 00:06:18.969 14:22:11 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.969 14:22:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:18.969 ************************************ 00:06:18.969 END TEST env_pci 00:06:18.969 ************************************ 00:06:18.969 14:22:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:18.969 14:22:11 env -- env/env.sh@15 -- # uname 00:06:18.969 14:22:11 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:18.969 14:22:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:18.969 14:22:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:18.969 14:22:11 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:18.969 14:22:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.969 14:22:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.969 ************************************ 00:06:18.969 START TEST env_dpdk_post_init 00:06:18.969 ************************************ 00:06:18.969 14:22:11 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:18.969 EAL: Detected CPU lcores: 10 00:06:18.969 EAL: Detected NUMA nodes: 1 00:06:18.969 EAL: Detected shared linkage of DPDK 00:06:18.969 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:18.969 EAL: Selected IOVA mode 'PA' 00:06:19.227 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:19.227 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:19.227 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:19.227 Starting DPDK initialization... 00:06:19.227 Starting SPDK post initialization... 00:06:19.227 SPDK NVMe probe 00:06:19.227 Attaching to 0000:00:10.0 00:06:19.227 Attaching to 0000:00:11.0 00:06:19.227 Attached to 0000:00:10.0 00:06:19.227 Attached to 0000:00:11.0 00:06:19.227 Cleaning up... 00:06:19.227 00:06:19.227 real 0m0.181s 00:06:19.227 user 0m0.047s 00:06:19.227 sys 0m0.033s 00:06:19.227 14:22:11 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.227 14:22:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:19.227 ************************************ 00:06:19.227 END TEST env_dpdk_post_init 00:06:19.227 ************************************ 00:06:19.227 14:22:11 env -- env/env.sh@26 -- # uname 00:06:19.227 14:22:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:19.227 14:22:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:19.227 14:22:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.227 14:22:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.227 14:22:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:19.227 ************************************ 00:06:19.227 START TEST env_mem_callbacks 00:06:19.227 ************************************ 00:06:19.227 14:22:11 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:19.227 EAL: Detected CPU lcores: 10 00:06:19.227 EAL: Detected NUMA nodes: 1 00:06:19.227 EAL: Detected shared linkage of DPDK 00:06:19.227 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:19.227 EAL: Selected IOVA mode 'PA' 00:06:19.486 00:06:19.486 00:06:19.486 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.486 http://cunit.sourceforge.net/ 00:06:19.486 00:06:19.486 00:06:19.486 Suite: memory 00:06:19.486 Test: test ... 
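The register/unregister lines that follow are printed by the test's memory-map notify callback each time a region is added to or removed from the SPDK memory map, including the registrations driven by plain mallocs of hugepage-backed memory. The sketch below reconstructs that mechanism with the public env API; the callback body, the map ops, and the program name are assumptions, not the actual source of the mem_callbacks test.

/*
 * Hedged sketch: a mem map whose notify callback observes every
 * registration/unregistration, similar in spirit to the output below.
 * Identifiers are assumptions; this is not test/env/mem_callbacks itself.
 */
#include <stdio.h>
#include "spdk/env.h"

static int
notify_cb(void *cb_ctx, struct spdk_mem_map *map,
          enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
    (void)cb_ctx;
    (void)map;
    printf("%s %p %zu\n",
           action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
           vaddr, size);
    return 0;
}

static const struct spdk_mem_map_ops g_ops = {
    .notify_cb = notify_cb,
    .are_contiguous = NULL,
};

int
main(void)
{
    struct spdk_env_opts opts;
    struct spdk_mem_map *map;

    opts.opts_size = sizeof(opts);
    spdk_env_opts_init(&opts);
    opts.name = "mem_cb_sketch";    /* assumed name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    /* Regions already known to the env layer are replayed into the new
     * map; later spdk_mem_register()/spdk_mem_unregister() calls arrive
     * as they happen. */
    map = spdk_mem_map_alloc(0, &g_ops, NULL);
    if (map == NULL) {
        return 1;
    }

    spdk_mem_map_free(&map);
    spdk_env_fini();
    return 0;
}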
00:06:19.486 register 0x200000200000 2097152 00:06:19.486 malloc 3145728 00:06:19.486 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:19.486 register 0x200000400000 4194304 00:06:19.486 buf 0x200000500000 len 3145728 PASSED 00:06:19.486 malloc 64 00:06:19.486 buf 0x2000004fff40 len 64 PASSED 00:06:19.486 malloc 4194304 00:06:19.486 register 0x200000800000 6291456 00:06:19.486 buf 0x200000a00000 len 4194304 PASSED 00:06:19.486 free 0x200000500000 3145728 00:06:19.486 free 0x2000004fff40 64 00:06:19.486 unregister 0x200000400000 4194304 PASSED 00:06:19.486 free 0x200000a00000 4194304 00:06:19.486 unregister 0x200000800000 6291456 PASSED 00:06:19.486 malloc 8388608 00:06:19.486 register 0x200000400000 10485760 00:06:19.486 buf 0x200000600000 len 8388608 PASSED 00:06:19.486 free 0x200000600000 8388608 00:06:19.486 unregister 0x200000400000 10485760 PASSED 00:06:19.486 passed 00:06:19.486 00:06:19.486 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.486 suites 1 1 n/a 0 0 00:06:19.486 tests 1 1 1 0 0 00:06:19.486 asserts 15 15 15 0 n/a 00:06:19.486 00:06:19.486 Elapsed time = 0.005 seconds 00:06:19.486 00:06:19.486 real 0m0.132s 00:06:19.486 user 0m0.011s 00:06:19.486 sys 0m0.021s 00:06:19.486 14:22:11 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.486 14:22:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:19.486 ************************************ 00:06:19.486 END TEST env_mem_callbacks 00:06:19.486 ************************************ 00:06:19.486 00:06:19.486 real 0m1.877s 00:06:19.486 user 0m0.896s 00:06:19.486 sys 0m0.620s 00:06:19.486 14:22:11 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.486 14:22:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:19.486 ************************************ 00:06:19.486 END TEST env 00:06:19.486 ************************************ 00:06:19.486 14:22:11 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:19.486 14:22:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.486 14:22:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.486 14:22:11 -- common/autotest_common.sh@10 -- # set +x 00:06:19.486 ************************************ 00:06:19.486 START TEST rpc 00:06:19.486 ************************************ 00:06:19.486 14:22:11 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:19.486 * Looking for test storage... 
00:06:19.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:19.486 14:22:11 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:19.486 14:22:11 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:19.486 14:22:11 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.747 14:22:11 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.747 14:22:11 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.747 14:22:11 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.747 14:22:11 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.747 14:22:11 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.747 14:22:11 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.747 14:22:11 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.747 14:22:11 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.747 14:22:11 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.747 14:22:11 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.747 14:22:11 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.747 14:22:11 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.747 14:22:11 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:19.747 14:22:11 rpc -- scripts/common.sh@345 -- # : 1 00:06:19.747 14:22:11 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.747 14:22:11 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.747 14:22:11 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:19.747 14:22:11 rpc -- scripts/common.sh@353 -- # local d=1 00:06:19.747 14:22:11 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.747 14:22:11 rpc -- scripts/common.sh@355 -- # echo 1 00:06:19.747 14:22:11 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.747 14:22:11 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:19.747 14:22:11 rpc -- scripts/common.sh@353 -- # local d=2 00:06:19.747 14:22:11 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.747 14:22:11 rpc -- scripts/common.sh@355 -- # echo 2 00:06:19.747 14:22:11 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.747 14:22:11 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.747 14:22:11 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.747 14:22:11 rpc -- scripts/common.sh@368 -- # return 0 00:06:19.747 14:22:11 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.747 14:22:11 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.747 --rc genhtml_branch_coverage=1 00:06:19.747 --rc genhtml_function_coverage=1 00:06:19.747 --rc genhtml_legend=1 00:06:19.747 --rc geninfo_all_blocks=1 00:06:19.747 --rc geninfo_unexecuted_blocks=1 00:06:19.747 00:06:19.747 ' 00:06:19.747 14:22:11 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.747 --rc genhtml_branch_coverage=1 00:06:19.747 --rc genhtml_function_coverage=1 00:06:19.747 --rc genhtml_legend=1 00:06:19.747 --rc geninfo_all_blocks=1 00:06:19.747 --rc geninfo_unexecuted_blocks=1 00:06:19.747 00:06:19.747 ' 00:06:19.747 14:22:11 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:19.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.747 --rc genhtml_branch_coverage=1 00:06:19.747 --rc genhtml_function_coverage=1 00:06:19.747 --rc 
genhtml_legend=1 00:06:19.747 --rc geninfo_all_blocks=1 00:06:19.747 --rc geninfo_unexecuted_blocks=1 00:06:19.747 00:06:19.747 ' 00:06:19.747 14:22:11 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.747 --rc genhtml_branch_coverage=1 00:06:19.747 --rc genhtml_function_coverage=1 00:06:19.747 --rc genhtml_legend=1 00:06:19.747 --rc geninfo_all_blocks=1 00:06:19.747 --rc geninfo_unexecuted_blocks=1 00:06:19.747 00:06:19.747 ' 00:06:19.747 14:22:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70761 00:06:19.747 14:22:11 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:19.747 14:22:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.747 14:22:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70761 00:06:19.747 14:22:11 rpc -- common/autotest_common.sh@835 -- # '[' -z 70761 ']' 00:06:19.747 14:22:11 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.747 14:22:11 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.748 14:22:11 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.748 14:22:11 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.748 14:22:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.748 [2024-12-16 14:22:11.755505] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:19.748 [2024-12-16 14:22:11.755600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70761 ] 00:06:19.748 [2024-12-16 14:22:11.892711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.748 [2024-12-16 14:22:11.912326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:19.748 [2024-12-16 14:22:11.912395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70761' to capture a snapshot of events at runtime. 00:06:19.748 [2024-12-16 14:22:11.912422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.748 [2024-12-16 14:22:11.912429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.748 [2024-12-16 14:22:11.912435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70761 for offline analysis/debug. 
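The app_setup_trace notices above follow from rpc.sh launching spdk_tgt with '-e bdev': that flag only fills in the application's tracepoint group mask, and the resulting mask (0x8, the bdev group, as the trace_get_info output further down confirms) is what backs the /dev/shm/spdk_tgt_trace.pid70761 file named in the notice. A hedged sketch of the equivalent in-application setup is below; the mask and socket path mirror the log, while the program name and the empty start callback are assumptions.

/*
 * Hedged sketch: an SPDK app enabling the bdev tracepoint group and the
 * default JSON-RPC socket, roughly what 'spdk_tgt -e bdev' sets up.
 * Not spdk_tgt's source; names are assumptions.
 */
#include "spdk/event.h"

static void
start_cb(void *ctx)
{
    (void)ctx;
    /* Target is up; rpc_cmd / rpc.py can now talk to opts.rpc_addr. */
}

int
main(void)
{
    struct spdk_app_opts opts;
    int rc;

    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "spdk_tgt_sketch";          /* assumed name */
    opts.rpc_addr = "/var/tmp/spdk.sock";   /* socket waited on by waitforlisten */
    opts.tpoint_group_mask = "0x8";         /* bdev group, same effect as '-e bdev' */

    rc = spdk_app_start(&opts, start_cb, NULL);
    spdk_app_fini();
    return rc;
}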
00:06:19.748 [2024-12-16 14:22:11.912742] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.008 [2024-12-16 14:22:11.948265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.009 14:22:12 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.009 14:22:12 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:20.009 14:22:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:20.009 14:22:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:20.009 14:22:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:20.009 14:22:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:20.009 14:22:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.009 14:22:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.009 14:22:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.009 ************************************ 00:06:20.009 START TEST rpc_integrity 00:06:20.009 ************************************ 00:06:20.009 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:20.009 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:20.009 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.009 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.009 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.009 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:20.009 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:20.009 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:20.009 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:20.009 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.009 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.009 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.009 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:20.009 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:20.009 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.009 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.009 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.009 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:20.009 { 00:06:20.009 "name": "Malloc0", 00:06:20.009 "aliases": [ 00:06:20.009 "7b6c510e-82af-4eec-a4cc-d04139ae8fc6" 00:06:20.009 ], 00:06:20.009 "product_name": "Malloc disk", 00:06:20.009 "block_size": 512, 00:06:20.009 "num_blocks": 16384, 00:06:20.009 "uuid": "7b6c510e-82af-4eec-a4cc-d04139ae8fc6", 00:06:20.009 "assigned_rate_limits": { 00:06:20.009 "rw_ios_per_sec": 0, 00:06:20.009 "rw_mbytes_per_sec": 0, 00:06:20.009 "r_mbytes_per_sec": 0, 00:06:20.009 "w_mbytes_per_sec": 0 00:06:20.009 }, 00:06:20.009 "claimed": false, 00:06:20.009 "zoned": false, 00:06:20.009 
"supported_io_types": { 00:06:20.009 "read": true, 00:06:20.009 "write": true, 00:06:20.009 "unmap": true, 00:06:20.009 "flush": true, 00:06:20.009 "reset": true, 00:06:20.009 "nvme_admin": false, 00:06:20.009 "nvme_io": false, 00:06:20.009 "nvme_io_md": false, 00:06:20.009 "write_zeroes": true, 00:06:20.009 "zcopy": true, 00:06:20.009 "get_zone_info": false, 00:06:20.009 "zone_management": false, 00:06:20.009 "zone_append": false, 00:06:20.009 "compare": false, 00:06:20.009 "compare_and_write": false, 00:06:20.009 "abort": true, 00:06:20.009 "seek_hole": false, 00:06:20.009 "seek_data": false, 00:06:20.009 "copy": true, 00:06:20.009 "nvme_iov_md": false 00:06:20.009 }, 00:06:20.009 "memory_domains": [ 00:06:20.009 { 00:06:20.009 "dma_device_id": "system", 00:06:20.009 "dma_device_type": 1 00:06:20.009 }, 00:06:20.009 { 00:06:20.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.009 "dma_device_type": 2 00:06:20.009 } 00:06:20.009 ], 00:06:20.009 "driver_specific": {} 00:06:20.009 } 00:06:20.009 ]' 00:06:20.009 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:20.268 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:20.268 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:20.268 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.268 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.268 [2024-12-16 14:22:12.232776] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:20.268 [2024-12-16 14:22:12.232895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:20.268 [2024-12-16 14:22:12.232932] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb08d40 00:06:20.268 [2024-12-16 14:22:12.232942] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:20.268 [2024-12-16 14:22:12.234575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:20.268 [2024-12-16 14:22:12.234610] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:20.268 Passthru0 00:06:20.268 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.268 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:20.268 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.268 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.268 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.268 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:20.268 { 00:06:20.268 "name": "Malloc0", 00:06:20.268 "aliases": [ 00:06:20.268 "7b6c510e-82af-4eec-a4cc-d04139ae8fc6" 00:06:20.268 ], 00:06:20.268 "product_name": "Malloc disk", 00:06:20.268 "block_size": 512, 00:06:20.268 "num_blocks": 16384, 00:06:20.268 "uuid": "7b6c510e-82af-4eec-a4cc-d04139ae8fc6", 00:06:20.268 "assigned_rate_limits": { 00:06:20.268 "rw_ios_per_sec": 0, 00:06:20.268 "rw_mbytes_per_sec": 0, 00:06:20.268 "r_mbytes_per_sec": 0, 00:06:20.268 "w_mbytes_per_sec": 0 00:06:20.268 }, 00:06:20.268 "claimed": true, 00:06:20.268 "claim_type": "exclusive_write", 00:06:20.268 "zoned": false, 00:06:20.268 "supported_io_types": { 00:06:20.268 "read": true, 00:06:20.268 "write": true, 00:06:20.268 "unmap": true, 00:06:20.268 "flush": true, 00:06:20.268 "reset": true, 00:06:20.269 "nvme_admin": false, 
00:06:20.269 "nvme_io": false, 00:06:20.269 "nvme_io_md": false, 00:06:20.269 "write_zeroes": true, 00:06:20.269 "zcopy": true, 00:06:20.269 "get_zone_info": false, 00:06:20.269 "zone_management": false, 00:06:20.269 "zone_append": false, 00:06:20.269 "compare": false, 00:06:20.269 "compare_and_write": false, 00:06:20.269 "abort": true, 00:06:20.269 "seek_hole": false, 00:06:20.269 "seek_data": false, 00:06:20.269 "copy": true, 00:06:20.269 "nvme_iov_md": false 00:06:20.269 }, 00:06:20.269 "memory_domains": [ 00:06:20.269 { 00:06:20.269 "dma_device_id": "system", 00:06:20.269 "dma_device_type": 1 00:06:20.269 }, 00:06:20.269 { 00:06:20.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.269 "dma_device_type": 2 00:06:20.269 } 00:06:20.269 ], 00:06:20.269 "driver_specific": {} 00:06:20.269 }, 00:06:20.269 { 00:06:20.269 "name": "Passthru0", 00:06:20.269 "aliases": [ 00:06:20.269 "5abfc9b9-1cbb-5bc4-a1e1-bfa9bd257a18" 00:06:20.269 ], 00:06:20.269 "product_name": "passthru", 00:06:20.269 "block_size": 512, 00:06:20.269 "num_blocks": 16384, 00:06:20.269 "uuid": "5abfc9b9-1cbb-5bc4-a1e1-bfa9bd257a18", 00:06:20.269 "assigned_rate_limits": { 00:06:20.269 "rw_ios_per_sec": 0, 00:06:20.269 "rw_mbytes_per_sec": 0, 00:06:20.269 "r_mbytes_per_sec": 0, 00:06:20.269 "w_mbytes_per_sec": 0 00:06:20.269 }, 00:06:20.269 "claimed": false, 00:06:20.269 "zoned": false, 00:06:20.269 "supported_io_types": { 00:06:20.269 "read": true, 00:06:20.269 "write": true, 00:06:20.269 "unmap": true, 00:06:20.269 "flush": true, 00:06:20.269 "reset": true, 00:06:20.269 "nvme_admin": false, 00:06:20.269 "nvme_io": false, 00:06:20.269 "nvme_io_md": false, 00:06:20.269 "write_zeroes": true, 00:06:20.269 "zcopy": true, 00:06:20.269 "get_zone_info": false, 00:06:20.269 "zone_management": false, 00:06:20.269 "zone_append": false, 00:06:20.269 "compare": false, 00:06:20.269 "compare_and_write": false, 00:06:20.269 "abort": true, 00:06:20.269 "seek_hole": false, 00:06:20.269 "seek_data": false, 00:06:20.269 "copy": true, 00:06:20.269 "nvme_iov_md": false 00:06:20.269 }, 00:06:20.269 "memory_domains": [ 00:06:20.269 { 00:06:20.269 "dma_device_id": "system", 00:06:20.269 "dma_device_type": 1 00:06:20.269 }, 00:06:20.269 { 00:06:20.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.269 "dma_device_type": 2 00:06:20.269 } 00:06:20.269 ], 00:06:20.269 "driver_specific": { 00:06:20.269 "passthru": { 00:06:20.269 "name": "Passthru0", 00:06:20.269 "base_bdev_name": "Malloc0" 00:06:20.269 } 00:06:20.269 } 00:06:20.269 } 00:06:20.269 ]' 00:06:20.269 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:20.269 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:20.269 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:20.269 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.269 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.269 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.269 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:20.269 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.269 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.269 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.269 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:20.269 14:22:12 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.269 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.269 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.269 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:20.269 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:20.269 14:22:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:20.269 00:06:20.269 real 0m0.321s 00:06:20.269 user 0m0.215s 00:06:20.269 sys 0m0.038s 00:06:20.269 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.269 14:22:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.269 ************************************ 00:06:20.269 END TEST rpc_integrity 00:06:20.269 ************************************ 00:06:20.269 14:22:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:20.269 14:22:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.269 14:22:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.269 14:22:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.269 ************************************ 00:06:20.269 START TEST rpc_plugins 00:06:20.269 ************************************ 00:06:20.269 14:22:12 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:20.269 14:22:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:20.269 14:22:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.269 14:22:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.269 14:22:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.269 14:22:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:20.269 14:22:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:20.269 14:22:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.269 14:22:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.528 14:22:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.528 14:22:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:20.528 { 00:06:20.528 "name": "Malloc1", 00:06:20.528 "aliases": [ 00:06:20.528 "79e899c3-d879-49f5-b203-3c3f17d7e3b3" 00:06:20.528 ], 00:06:20.528 "product_name": "Malloc disk", 00:06:20.528 "block_size": 4096, 00:06:20.528 "num_blocks": 256, 00:06:20.528 "uuid": "79e899c3-d879-49f5-b203-3c3f17d7e3b3", 00:06:20.528 "assigned_rate_limits": { 00:06:20.528 "rw_ios_per_sec": 0, 00:06:20.528 "rw_mbytes_per_sec": 0, 00:06:20.528 "r_mbytes_per_sec": 0, 00:06:20.528 "w_mbytes_per_sec": 0 00:06:20.528 }, 00:06:20.528 "claimed": false, 00:06:20.528 "zoned": false, 00:06:20.528 "supported_io_types": { 00:06:20.528 "read": true, 00:06:20.528 "write": true, 00:06:20.528 "unmap": true, 00:06:20.528 "flush": true, 00:06:20.528 "reset": true, 00:06:20.528 "nvme_admin": false, 00:06:20.528 "nvme_io": false, 00:06:20.528 "nvme_io_md": false, 00:06:20.528 "write_zeroes": true, 00:06:20.528 "zcopy": true, 00:06:20.528 "get_zone_info": false, 00:06:20.528 "zone_management": false, 00:06:20.528 "zone_append": false, 00:06:20.528 "compare": false, 00:06:20.528 "compare_and_write": false, 00:06:20.528 "abort": true, 00:06:20.528 "seek_hole": false, 00:06:20.528 "seek_data": false, 00:06:20.528 "copy": true, 00:06:20.528 "nvme_iov_md": false 00:06:20.528 }, 00:06:20.528 "memory_domains": [ 00:06:20.528 { 
00:06:20.528 "dma_device_id": "system", 00:06:20.528 "dma_device_type": 1 00:06:20.528 }, 00:06:20.528 { 00:06:20.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.528 "dma_device_type": 2 00:06:20.528 } 00:06:20.528 ], 00:06:20.528 "driver_specific": {} 00:06:20.528 } 00:06:20.528 ]' 00:06:20.528 14:22:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:20.528 14:22:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:20.528 14:22:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:20.528 14:22:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.528 14:22:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.528 14:22:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.528 14:22:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:20.528 14:22:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.528 14:22:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.528 14:22:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.528 14:22:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:20.528 14:22:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:20.528 14:22:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:20.528 00:06:20.528 real 0m0.157s 00:06:20.528 user 0m0.106s 00:06:20.528 sys 0m0.018s 00:06:20.528 14:22:12 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.528 14:22:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.528 ************************************ 00:06:20.528 END TEST rpc_plugins 00:06:20.528 ************************************ 00:06:20.528 14:22:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:20.528 14:22:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.528 14:22:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.528 14:22:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.528 ************************************ 00:06:20.528 START TEST rpc_trace_cmd_test 00:06:20.528 ************************************ 00:06:20.528 14:22:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:20.528 14:22:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:20.528 14:22:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:20.528 14:22:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.528 14:22:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.528 14:22:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.528 14:22:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:20.528 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70761", 00:06:20.528 "tpoint_group_mask": "0x8", 00:06:20.528 "iscsi_conn": { 00:06:20.528 "mask": "0x2", 00:06:20.528 "tpoint_mask": "0x0" 00:06:20.528 }, 00:06:20.528 "scsi": { 00:06:20.528 "mask": "0x4", 00:06:20.528 "tpoint_mask": "0x0" 00:06:20.528 }, 00:06:20.528 "bdev": { 00:06:20.528 "mask": "0x8", 00:06:20.528 "tpoint_mask": "0xffffffffffffffff" 00:06:20.528 }, 00:06:20.528 "nvmf_rdma": { 00:06:20.528 "mask": "0x10", 00:06:20.528 "tpoint_mask": "0x0" 00:06:20.528 }, 00:06:20.529 "nvmf_tcp": { 00:06:20.529 "mask": "0x20", 00:06:20.529 "tpoint_mask": "0x0" 00:06:20.529 }, 00:06:20.529 "ftl": { 00:06:20.529 
"mask": "0x40", 00:06:20.529 "tpoint_mask": "0x0" 00:06:20.529 }, 00:06:20.529 "blobfs": { 00:06:20.529 "mask": "0x80", 00:06:20.529 "tpoint_mask": "0x0" 00:06:20.529 }, 00:06:20.529 "dsa": { 00:06:20.529 "mask": "0x200", 00:06:20.529 "tpoint_mask": "0x0" 00:06:20.529 }, 00:06:20.529 "thread": { 00:06:20.529 "mask": "0x400", 00:06:20.529 "tpoint_mask": "0x0" 00:06:20.529 }, 00:06:20.529 "nvme_pcie": { 00:06:20.529 "mask": "0x800", 00:06:20.529 "tpoint_mask": "0x0" 00:06:20.529 }, 00:06:20.529 "iaa": { 00:06:20.529 "mask": "0x1000", 00:06:20.529 "tpoint_mask": "0x0" 00:06:20.529 }, 00:06:20.529 "nvme_tcp": { 00:06:20.529 "mask": "0x2000", 00:06:20.529 "tpoint_mask": "0x0" 00:06:20.529 }, 00:06:20.529 "bdev_nvme": { 00:06:20.529 "mask": "0x4000", 00:06:20.529 "tpoint_mask": "0x0" 00:06:20.529 }, 00:06:20.529 "sock": { 00:06:20.529 "mask": "0x8000", 00:06:20.529 "tpoint_mask": "0x0" 00:06:20.529 }, 00:06:20.529 "blob": { 00:06:20.529 "mask": "0x10000", 00:06:20.529 "tpoint_mask": "0x0" 00:06:20.529 }, 00:06:20.529 "bdev_raid": { 00:06:20.529 "mask": "0x20000", 00:06:20.529 "tpoint_mask": "0x0" 00:06:20.529 }, 00:06:20.529 "scheduler": { 00:06:20.529 "mask": "0x40000", 00:06:20.529 "tpoint_mask": "0x0" 00:06:20.529 } 00:06:20.529 }' 00:06:20.529 14:22:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:20.788 14:22:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:20.788 14:22:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:20.788 14:22:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:20.788 14:22:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:20.788 14:22:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:20.788 14:22:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:20.788 14:22:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:20.788 14:22:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:20.788 14:22:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:20.788 00:06:20.788 real 0m0.276s 00:06:20.788 user 0m0.239s 00:06:20.788 sys 0m0.028s 00:06:20.788 14:22:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.788 ************************************ 00:06:20.788 END TEST rpc_trace_cmd_test 00:06:20.788 14:22:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.788 ************************************ 00:06:20.788 14:22:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:20.788 14:22:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:20.788 14:22:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:20.788 14:22:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.788 14:22:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.788 14:22:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.788 ************************************ 00:06:20.788 START TEST rpc_daemon_integrity 00:06:20.788 ************************************ 00:06:20.788 14:22:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:21.047 14:22:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:21.047 14:22:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.047 14:22:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.047 
14:22:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.047 14:22:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:21.047 14:22:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:21.047 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:21.047 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:21.047 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.047 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.047 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.047 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:21.047 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:21.047 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.047 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.047 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.047 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:21.047 { 00:06:21.047 "name": "Malloc2", 00:06:21.047 "aliases": [ 00:06:21.047 "cc576f04-56db-4e5f-850f-921d9ceeb629" 00:06:21.047 ], 00:06:21.047 "product_name": "Malloc disk", 00:06:21.047 "block_size": 512, 00:06:21.047 "num_blocks": 16384, 00:06:21.047 "uuid": "cc576f04-56db-4e5f-850f-921d9ceeb629", 00:06:21.047 "assigned_rate_limits": { 00:06:21.047 "rw_ios_per_sec": 0, 00:06:21.047 "rw_mbytes_per_sec": 0, 00:06:21.047 "r_mbytes_per_sec": 0, 00:06:21.047 "w_mbytes_per_sec": 0 00:06:21.047 }, 00:06:21.047 "claimed": false, 00:06:21.047 "zoned": false, 00:06:21.047 "supported_io_types": { 00:06:21.047 "read": true, 00:06:21.047 "write": true, 00:06:21.047 "unmap": true, 00:06:21.047 "flush": true, 00:06:21.047 "reset": true, 00:06:21.047 "nvme_admin": false, 00:06:21.047 "nvme_io": false, 00:06:21.047 "nvme_io_md": false, 00:06:21.047 "write_zeroes": true, 00:06:21.047 "zcopy": true, 00:06:21.047 "get_zone_info": false, 00:06:21.047 "zone_management": false, 00:06:21.047 "zone_append": false, 00:06:21.047 "compare": false, 00:06:21.047 "compare_and_write": false, 00:06:21.047 "abort": true, 00:06:21.047 "seek_hole": false, 00:06:21.047 "seek_data": false, 00:06:21.047 "copy": true, 00:06:21.047 "nvme_iov_md": false 00:06:21.047 }, 00:06:21.047 "memory_domains": [ 00:06:21.047 { 00:06:21.047 "dma_device_id": "system", 00:06:21.047 "dma_device_type": 1 00:06:21.047 }, 00:06:21.047 { 00:06:21.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.047 "dma_device_type": 2 00:06:21.047 } 00:06:21.047 ], 00:06:21.047 "driver_specific": {} 00:06:21.047 } 00:06:21.047 ]' 00:06:21.047 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:21.047 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.048 [2024-12-16 14:22:13.145177] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:21.048 [2024-12-16 14:22:13.145249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:06:21.048 [2024-12-16 14:22:13.145264] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xad6f10 00:06:21.048 [2024-12-16 14:22:13.145272] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:21.048 [2024-12-16 14:22:13.146564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:21.048 [2024-12-16 14:22:13.146614] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:21.048 Passthru0 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:21.048 { 00:06:21.048 "name": "Malloc2", 00:06:21.048 "aliases": [ 00:06:21.048 "cc576f04-56db-4e5f-850f-921d9ceeb629" 00:06:21.048 ], 00:06:21.048 "product_name": "Malloc disk", 00:06:21.048 "block_size": 512, 00:06:21.048 "num_blocks": 16384, 00:06:21.048 "uuid": "cc576f04-56db-4e5f-850f-921d9ceeb629", 00:06:21.048 "assigned_rate_limits": { 00:06:21.048 "rw_ios_per_sec": 0, 00:06:21.048 "rw_mbytes_per_sec": 0, 00:06:21.048 "r_mbytes_per_sec": 0, 00:06:21.048 "w_mbytes_per_sec": 0 00:06:21.048 }, 00:06:21.048 "claimed": true, 00:06:21.048 "claim_type": "exclusive_write", 00:06:21.048 "zoned": false, 00:06:21.048 "supported_io_types": { 00:06:21.048 "read": true, 00:06:21.048 "write": true, 00:06:21.048 "unmap": true, 00:06:21.048 "flush": true, 00:06:21.048 "reset": true, 00:06:21.048 "nvme_admin": false, 00:06:21.048 "nvme_io": false, 00:06:21.048 "nvme_io_md": false, 00:06:21.048 "write_zeroes": true, 00:06:21.048 "zcopy": true, 00:06:21.048 "get_zone_info": false, 00:06:21.048 "zone_management": false, 00:06:21.048 "zone_append": false, 00:06:21.048 "compare": false, 00:06:21.048 "compare_and_write": false, 00:06:21.048 "abort": true, 00:06:21.048 "seek_hole": false, 00:06:21.048 "seek_data": false, 00:06:21.048 "copy": true, 00:06:21.048 "nvme_iov_md": false 00:06:21.048 }, 00:06:21.048 "memory_domains": [ 00:06:21.048 { 00:06:21.048 "dma_device_id": "system", 00:06:21.048 "dma_device_type": 1 00:06:21.048 }, 00:06:21.048 { 00:06:21.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.048 "dma_device_type": 2 00:06:21.048 } 00:06:21.048 ], 00:06:21.048 "driver_specific": {} 00:06:21.048 }, 00:06:21.048 { 00:06:21.048 "name": "Passthru0", 00:06:21.048 "aliases": [ 00:06:21.048 "b124ff3f-e80c-54d9-b1eb-5e04c1bf27d8" 00:06:21.048 ], 00:06:21.048 "product_name": "passthru", 00:06:21.048 "block_size": 512, 00:06:21.048 "num_blocks": 16384, 00:06:21.048 "uuid": "b124ff3f-e80c-54d9-b1eb-5e04c1bf27d8", 00:06:21.048 "assigned_rate_limits": { 00:06:21.048 "rw_ios_per_sec": 0, 00:06:21.048 "rw_mbytes_per_sec": 0, 00:06:21.048 "r_mbytes_per_sec": 0, 00:06:21.048 "w_mbytes_per_sec": 0 00:06:21.048 }, 00:06:21.048 "claimed": false, 00:06:21.048 "zoned": false, 00:06:21.048 "supported_io_types": { 00:06:21.048 "read": true, 00:06:21.048 "write": true, 00:06:21.048 "unmap": true, 00:06:21.048 "flush": true, 00:06:21.048 "reset": true, 00:06:21.048 "nvme_admin": false, 00:06:21.048 "nvme_io": false, 00:06:21.048 "nvme_io_md": 
false, 00:06:21.048 "write_zeroes": true, 00:06:21.048 "zcopy": true, 00:06:21.048 "get_zone_info": false, 00:06:21.048 "zone_management": false, 00:06:21.048 "zone_append": false, 00:06:21.048 "compare": false, 00:06:21.048 "compare_and_write": false, 00:06:21.048 "abort": true, 00:06:21.048 "seek_hole": false, 00:06:21.048 "seek_data": false, 00:06:21.048 "copy": true, 00:06:21.048 "nvme_iov_md": false 00:06:21.048 }, 00:06:21.048 "memory_domains": [ 00:06:21.048 { 00:06:21.048 "dma_device_id": "system", 00:06:21.048 "dma_device_type": 1 00:06:21.048 }, 00:06:21.048 { 00:06:21.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.048 "dma_device_type": 2 00:06:21.048 } 00:06:21.048 ], 00:06:21.048 "driver_specific": { 00:06:21.048 "passthru": { 00:06:21.048 "name": "Passthru0", 00:06:21.048 "base_bdev_name": "Malloc2" 00:06:21.048 } 00:06:21.048 } 00:06:21.048 } 00:06:21.048 ]' 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.048 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.307 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.307 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:21.307 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.307 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.307 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.307 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:21.307 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:21.307 14:22:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:21.307 00:06:21.307 real 0m0.330s 00:06:21.307 user 0m0.225s 00:06:21.307 sys 0m0.037s 00:06:21.307 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.307 14:22:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.307 ************************************ 00:06:21.307 END TEST rpc_daemon_integrity 00:06:21.307 ************************************ 00:06:21.307 14:22:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:21.307 14:22:13 rpc -- rpc/rpc.sh@84 -- # killprocess 70761 00:06:21.307 14:22:13 rpc -- common/autotest_common.sh@954 -- # '[' -z 70761 ']' 00:06:21.307 14:22:13 rpc -- common/autotest_common.sh@958 -- # kill -0 70761 00:06:21.307 14:22:13 rpc -- common/autotest_common.sh@959 -- # uname 00:06:21.307 14:22:13 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.307 14:22:13 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70761 00:06:21.307 14:22:13 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.307 
14:22:13 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.307 killing process with pid 70761 00:06:21.307 14:22:13 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70761' 00:06:21.307 14:22:13 rpc -- common/autotest_common.sh@973 -- # kill 70761 00:06:21.307 14:22:13 rpc -- common/autotest_common.sh@978 -- # wait 70761 00:06:21.566 00:06:21.566 real 0m2.082s 00:06:21.566 user 0m2.893s 00:06:21.566 sys 0m0.522s 00:06:21.566 14:22:13 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.566 14:22:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.566 ************************************ 00:06:21.566 END TEST rpc 00:06:21.566 ************************************ 00:06:21.566 14:22:13 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:21.566 14:22:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.566 14:22:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.566 14:22:13 -- common/autotest_common.sh@10 -- # set +x 00:06:21.566 ************************************ 00:06:21.566 START TEST skip_rpc 00:06:21.567 ************************************ 00:06:21.567 14:22:13 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:21.567 * Looking for test storage... 00:06:21.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:21.567 14:22:13 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:21.567 14:22:13 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:21.567 14:22:13 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:21.825 14:22:13 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:21.825 14:22:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.825 14:22:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.825 14:22:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.825 14:22:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.825 14:22:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.825 14:22:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.825 14:22:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.826 14:22:13 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:21.826 14:22:13 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.826 14:22:13 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:21.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.826 --rc genhtml_branch_coverage=1 00:06:21.826 --rc genhtml_function_coverage=1 00:06:21.826 --rc genhtml_legend=1 00:06:21.826 --rc geninfo_all_blocks=1 00:06:21.826 --rc geninfo_unexecuted_blocks=1 00:06:21.826 00:06:21.826 ' 00:06:21.826 14:22:13 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:21.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.826 --rc genhtml_branch_coverage=1 00:06:21.826 --rc genhtml_function_coverage=1 00:06:21.826 --rc genhtml_legend=1 00:06:21.826 --rc geninfo_all_blocks=1 00:06:21.826 --rc geninfo_unexecuted_blocks=1 00:06:21.826 00:06:21.826 ' 00:06:21.826 14:22:13 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:21.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.826 --rc genhtml_branch_coverage=1 00:06:21.826 --rc genhtml_function_coverage=1 00:06:21.826 --rc genhtml_legend=1 00:06:21.826 --rc geninfo_all_blocks=1 00:06:21.826 --rc geninfo_unexecuted_blocks=1 00:06:21.826 00:06:21.826 ' 00:06:21.826 14:22:13 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:21.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.826 --rc genhtml_branch_coverage=1 00:06:21.826 --rc genhtml_function_coverage=1 00:06:21.826 --rc genhtml_legend=1 00:06:21.826 --rc geninfo_all_blocks=1 00:06:21.826 --rc geninfo_unexecuted_blocks=1 00:06:21.826 00:06:21.826 ' 00:06:21.826 14:22:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:21.826 14:22:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:21.826 14:22:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:21.826 14:22:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.826 14:22:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.826 14:22:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.826 ************************************ 00:06:21.826 START TEST skip_rpc 00:06:21.826 ************************************ 00:06:21.826 14:22:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:21.826 14:22:13 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=70954 00:06:21.826 14:22:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:21.826 14:22:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.826 14:22:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:21.826 [2024-12-16 14:22:13.918969] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:21.826 [2024-12-16 14:22:13.919107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70954 ] 00:06:22.085 [2024-12-16 14:22:14.067618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.085 [2024-12-16 14:22:14.089391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.085 [2024-12-16 14:22:14.123209] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70954 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 70954 ']' 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 70954 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70954 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 70954' 00:06:27.355 killing process with pid 70954 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 70954 00:06:27.355 14:22:18 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 70954 00:06:27.355 00:06:27.355 real 0m5.246s 00:06:27.355 user 0m4.995s 00:06:27.355 sys 0m0.169s 00:06:27.355 14:22:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.355 14:22:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.355 ************************************ 00:06:27.355 END TEST skip_rpc 00:06:27.355 ************************************ 00:06:27.355 14:22:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:27.355 14:22:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.355 14:22:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.355 14:22:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.355 ************************************ 00:06:27.355 START TEST skip_rpc_with_json 00:06:27.355 ************************************ 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71035 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71035 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 71035 ']' 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:27.355 [2024-12-16 14:22:19.219709] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
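The skip_rpc_with_json flow below configures a live target over RPC (nvmf_create_transport) and then snapshots the result with save_config; a target restarted from such a snapshot needs no per-boot RPC calls at all. A short sketch of that startup path follows, reusing the app-start pattern from the earlier sketch; the /tmp/config.json path is an assumption.

/*
 * Hedged sketch: boot an SPDK app from a saved JSON config instead of
 * configuring it over RPC. The file path is an assumption; the file's
 * shape matches the 'save_config' output shown below.
 */
#include "spdk/event.h"

static void
start_cb(void *ctx)
{
    (void)ctx;
    /* Subsystems were already configured from the JSON file. */
}

int
main(void)
{
    struct spdk_app_opts opts;
    int rc;

    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "json_config_sketch";            /* assumed name */
    opts.json_config_file = "/tmp/config.json";  /* e.g. redirected 'save_config' output */

    rc = spdk_app_start(&opts, start_cb, NULL);
    spdk_app_fini();
    return rc;
}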
00:06:27.355 [2024-12-16 14:22:19.219817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71035 ] 00:06:27.355 [2024-12-16 14:22:19.364202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.355 [2024-12-16 14:22:19.382507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.355 [2024-12-16 14:22:19.415621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:27.355 [2024-12-16 14:22:19.527636] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:27.355 request: 00:06:27.355 { 00:06:27.355 "trtype": "tcp", 00:06:27.355 "method": "nvmf_get_transports", 00:06:27.355 "req_id": 1 00:06:27.355 } 00:06:27.355 Got JSON-RPC error response 00:06:27.355 response: 00:06:27.355 { 00:06:27.355 "code": -19, 00:06:27.355 "message": "No such device" 00:06:27.355 } 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:27.355 [2024-12-16 14:22:19.539738] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.355 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:27.615 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.615 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:27.615 { 00:06:27.615 "subsystems": [ 00:06:27.615 { 00:06:27.615 "subsystem": "fsdev", 00:06:27.615 "config": [ 00:06:27.615 { 00:06:27.615 "method": "fsdev_set_opts", 00:06:27.615 "params": { 00:06:27.615 "fsdev_io_pool_size": 65535, 00:06:27.615 "fsdev_io_cache_size": 256 00:06:27.615 } 00:06:27.615 } 00:06:27.615 ] 00:06:27.615 }, 00:06:27.615 { 00:06:27.615 "subsystem": "keyring", 00:06:27.615 "config": [] 00:06:27.615 }, 00:06:27.615 { 00:06:27.615 "subsystem": "iobuf", 00:06:27.615 "config": [ 00:06:27.615 { 00:06:27.615 "method": "iobuf_set_options", 00:06:27.615 "params": { 00:06:27.615 "small_pool_count": 8192, 00:06:27.615 "large_pool_count": 1024, 00:06:27.615 "small_bufsize": 8192, 00:06:27.615 "large_bufsize": 135168, 00:06:27.615 "enable_numa": false 00:06:27.615 } 
00:06:27.615 } 00:06:27.615 ] 00:06:27.615 }, 00:06:27.615 { 00:06:27.615 "subsystem": "sock", 00:06:27.615 "config": [ 00:06:27.615 { 00:06:27.615 "method": "sock_set_default_impl", 00:06:27.615 "params": { 00:06:27.615 "impl_name": "uring" 00:06:27.615 } 00:06:27.615 }, 00:06:27.615 { 00:06:27.615 "method": "sock_impl_set_options", 00:06:27.615 "params": { 00:06:27.615 "impl_name": "ssl", 00:06:27.615 "recv_buf_size": 4096, 00:06:27.615 "send_buf_size": 4096, 00:06:27.615 "enable_recv_pipe": true, 00:06:27.615 "enable_quickack": false, 00:06:27.615 "enable_placement_id": 0, 00:06:27.615 "enable_zerocopy_send_server": true, 00:06:27.615 "enable_zerocopy_send_client": false, 00:06:27.615 "zerocopy_threshold": 0, 00:06:27.615 "tls_version": 0, 00:06:27.615 "enable_ktls": false 00:06:27.615 } 00:06:27.615 }, 00:06:27.615 { 00:06:27.615 "method": "sock_impl_set_options", 00:06:27.615 "params": { 00:06:27.615 "impl_name": "posix", 00:06:27.615 "recv_buf_size": 2097152, 00:06:27.615 "send_buf_size": 2097152, 00:06:27.615 "enable_recv_pipe": true, 00:06:27.615 "enable_quickack": false, 00:06:27.615 "enable_placement_id": 0, 00:06:27.615 "enable_zerocopy_send_server": true, 00:06:27.615 "enable_zerocopy_send_client": false, 00:06:27.615 "zerocopy_threshold": 0, 00:06:27.615 "tls_version": 0, 00:06:27.615 "enable_ktls": false 00:06:27.615 } 00:06:27.615 }, 00:06:27.615 { 00:06:27.615 "method": "sock_impl_set_options", 00:06:27.615 "params": { 00:06:27.615 "impl_name": "uring", 00:06:27.615 "recv_buf_size": 2097152, 00:06:27.615 "send_buf_size": 2097152, 00:06:27.615 "enable_recv_pipe": true, 00:06:27.615 "enable_quickack": false, 00:06:27.615 "enable_placement_id": 0, 00:06:27.615 "enable_zerocopy_send_server": false, 00:06:27.615 "enable_zerocopy_send_client": false, 00:06:27.615 "zerocopy_threshold": 0, 00:06:27.615 "tls_version": 0, 00:06:27.615 "enable_ktls": false 00:06:27.615 } 00:06:27.615 } 00:06:27.615 ] 00:06:27.615 }, 00:06:27.615 { 00:06:27.615 "subsystem": "vmd", 00:06:27.615 "config": [] 00:06:27.615 }, 00:06:27.615 { 00:06:27.615 "subsystem": "accel", 00:06:27.615 "config": [ 00:06:27.615 { 00:06:27.615 "method": "accel_set_options", 00:06:27.615 "params": { 00:06:27.615 "small_cache_size": 128, 00:06:27.615 "large_cache_size": 16, 00:06:27.615 "task_count": 2048, 00:06:27.615 "sequence_count": 2048, 00:06:27.615 "buf_count": 2048 00:06:27.615 } 00:06:27.615 } 00:06:27.615 ] 00:06:27.615 }, 00:06:27.615 { 00:06:27.615 "subsystem": "bdev", 00:06:27.615 "config": [ 00:06:27.615 { 00:06:27.615 "method": "bdev_set_options", 00:06:27.615 "params": { 00:06:27.615 "bdev_io_pool_size": 65535, 00:06:27.615 "bdev_io_cache_size": 256, 00:06:27.615 "bdev_auto_examine": true, 00:06:27.615 "iobuf_small_cache_size": 128, 00:06:27.615 "iobuf_large_cache_size": 16 00:06:27.615 } 00:06:27.615 }, 00:06:27.615 { 00:06:27.615 "method": "bdev_raid_set_options", 00:06:27.615 "params": { 00:06:27.615 "process_window_size_kb": 1024, 00:06:27.615 "process_max_bandwidth_mb_sec": 0 00:06:27.615 } 00:06:27.615 }, 00:06:27.615 { 00:06:27.615 "method": "bdev_iscsi_set_options", 00:06:27.615 "params": { 00:06:27.615 "timeout_sec": 30 00:06:27.615 } 00:06:27.615 }, 00:06:27.616 { 00:06:27.616 "method": "bdev_nvme_set_options", 00:06:27.616 "params": { 00:06:27.616 "action_on_timeout": "none", 00:06:27.616 "timeout_us": 0, 00:06:27.616 "timeout_admin_us": 0, 00:06:27.616 "keep_alive_timeout_ms": 10000, 00:06:27.616 "arbitration_burst": 0, 00:06:27.616 "low_priority_weight": 0, 00:06:27.616 "medium_priority_weight": 
0, 00:06:27.616 "high_priority_weight": 0, 00:06:27.616 "nvme_adminq_poll_period_us": 10000, 00:06:27.616 "nvme_ioq_poll_period_us": 0, 00:06:27.616 "io_queue_requests": 0, 00:06:27.616 "delay_cmd_submit": true, 00:06:27.616 "transport_retry_count": 4, 00:06:27.616 "bdev_retry_count": 3, 00:06:27.616 "transport_ack_timeout": 0, 00:06:27.616 "ctrlr_loss_timeout_sec": 0, 00:06:27.616 "reconnect_delay_sec": 0, 00:06:27.616 "fast_io_fail_timeout_sec": 0, 00:06:27.616 "disable_auto_failback": false, 00:06:27.616 "generate_uuids": false, 00:06:27.616 "transport_tos": 0, 00:06:27.616 "nvme_error_stat": false, 00:06:27.616 "rdma_srq_size": 0, 00:06:27.616 "io_path_stat": false, 00:06:27.616 "allow_accel_sequence": false, 00:06:27.616 "rdma_max_cq_size": 0, 00:06:27.616 "rdma_cm_event_timeout_ms": 0, 00:06:27.616 "dhchap_digests": [ 00:06:27.616 "sha256", 00:06:27.616 "sha384", 00:06:27.616 "sha512" 00:06:27.616 ], 00:06:27.616 "dhchap_dhgroups": [ 00:06:27.616 "null", 00:06:27.616 "ffdhe2048", 00:06:27.616 "ffdhe3072", 00:06:27.616 "ffdhe4096", 00:06:27.616 "ffdhe6144", 00:06:27.616 "ffdhe8192" 00:06:27.616 ], 00:06:27.616 "rdma_umr_per_io": false 00:06:27.616 } 00:06:27.616 }, 00:06:27.616 { 00:06:27.616 "method": "bdev_nvme_set_hotplug", 00:06:27.616 "params": { 00:06:27.616 "period_us": 100000, 00:06:27.616 "enable": false 00:06:27.616 } 00:06:27.616 }, 00:06:27.616 { 00:06:27.616 "method": "bdev_wait_for_examine" 00:06:27.616 } 00:06:27.616 ] 00:06:27.616 }, 00:06:27.616 { 00:06:27.616 "subsystem": "scsi", 00:06:27.616 "config": null 00:06:27.616 }, 00:06:27.616 { 00:06:27.616 "subsystem": "scheduler", 00:06:27.616 "config": [ 00:06:27.616 { 00:06:27.616 "method": "framework_set_scheduler", 00:06:27.616 "params": { 00:06:27.616 "name": "static" 00:06:27.616 } 00:06:27.616 } 00:06:27.616 ] 00:06:27.616 }, 00:06:27.616 { 00:06:27.616 "subsystem": "vhost_scsi", 00:06:27.616 "config": [] 00:06:27.616 }, 00:06:27.616 { 00:06:27.616 "subsystem": "vhost_blk", 00:06:27.616 "config": [] 00:06:27.616 }, 00:06:27.616 { 00:06:27.616 "subsystem": "ublk", 00:06:27.616 "config": [] 00:06:27.616 }, 00:06:27.616 { 00:06:27.616 "subsystem": "nbd", 00:06:27.616 "config": [] 00:06:27.616 }, 00:06:27.616 { 00:06:27.616 "subsystem": "nvmf", 00:06:27.616 "config": [ 00:06:27.616 { 00:06:27.616 "method": "nvmf_set_config", 00:06:27.616 "params": { 00:06:27.616 "discovery_filter": "match_any", 00:06:27.616 "admin_cmd_passthru": { 00:06:27.616 "identify_ctrlr": false 00:06:27.616 }, 00:06:27.616 "dhchap_digests": [ 00:06:27.616 "sha256", 00:06:27.616 "sha384", 00:06:27.616 "sha512" 00:06:27.616 ], 00:06:27.616 "dhchap_dhgroups": [ 00:06:27.616 "null", 00:06:27.616 "ffdhe2048", 00:06:27.616 "ffdhe3072", 00:06:27.616 "ffdhe4096", 00:06:27.616 "ffdhe6144", 00:06:27.616 "ffdhe8192" 00:06:27.616 ] 00:06:27.616 } 00:06:27.616 }, 00:06:27.616 { 00:06:27.616 "method": "nvmf_set_max_subsystems", 00:06:27.616 "params": { 00:06:27.616 "max_subsystems": 1024 00:06:27.616 } 00:06:27.616 }, 00:06:27.616 { 00:06:27.616 "method": "nvmf_set_crdt", 00:06:27.616 "params": { 00:06:27.616 "crdt1": 0, 00:06:27.616 "crdt2": 0, 00:06:27.616 "crdt3": 0 00:06:27.616 } 00:06:27.616 }, 00:06:27.616 { 00:06:27.616 "method": "nvmf_create_transport", 00:06:27.616 "params": { 00:06:27.616 "trtype": "TCP", 00:06:27.616 "max_queue_depth": 128, 00:06:27.616 "max_io_qpairs_per_ctrlr": 127, 00:06:27.616 "in_capsule_data_size": 4096, 00:06:27.616 "max_io_size": 131072, 00:06:27.616 "io_unit_size": 131072, 00:06:27.616 "max_aq_depth": 128, 00:06:27.616 
"num_shared_buffers": 511, 00:06:27.616 "buf_cache_size": 4294967295, 00:06:27.616 "dif_insert_or_strip": false, 00:06:27.616 "zcopy": false, 00:06:27.616 "c2h_success": true, 00:06:27.616 "sock_priority": 0, 00:06:27.616 "abort_timeout_sec": 1, 00:06:27.616 "ack_timeout": 0, 00:06:27.616 "data_wr_pool_size": 0 00:06:27.616 } 00:06:27.616 } 00:06:27.616 ] 00:06:27.616 }, 00:06:27.616 { 00:06:27.616 "subsystem": "iscsi", 00:06:27.616 "config": [ 00:06:27.616 { 00:06:27.616 "method": "iscsi_set_options", 00:06:27.616 "params": { 00:06:27.616 "node_base": "iqn.2016-06.io.spdk", 00:06:27.616 "max_sessions": 128, 00:06:27.616 "max_connections_per_session": 2, 00:06:27.616 "max_queue_depth": 64, 00:06:27.616 "default_time2wait": 2, 00:06:27.616 "default_time2retain": 20, 00:06:27.616 "first_burst_length": 8192, 00:06:27.616 "immediate_data": true, 00:06:27.616 "allow_duplicated_isid": false, 00:06:27.616 "error_recovery_level": 0, 00:06:27.616 "nop_timeout": 60, 00:06:27.616 "nop_in_interval": 30, 00:06:27.616 "disable_chap": false, 00:06:27.616 "require_chap": false, 00:06:27.616 "mutual_chap": false, 00:06:27.616 "chap_group": 0, 00:06:27.616 "max_large_datain_per_connection": 64, 00:06:27.616 "max_r2t_per_connection": 4, 00:06:27.616 "pdu_pool_size": 36864, 00:06:27.616 "immediate_data_pool_size": 16384, 00:06:27.616 "data_out_pool_size": 2048 00:06:27.616 } 00:06:27.616 } 00:06:27.616 ] 00:06:27.616 } 00:06:27.616 ] 00:06:27.616 } 00:06:27.616 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:27.616 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71035 00:06:27.616 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71035 ']' 00:06:27.616 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71035 00:06:27.616 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:27.616 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.616 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71035 00:06:27.616 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.616 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.616 killing process with pid 71035 00:06:27.616 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71035' 00:06:27.616 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71035 00:06:27.616 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71035 00:06:27.875 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71055 00:06:27.875 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:27.875 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:33.146 14:22:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71055 00:06:33.146 14:22:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71055 ']' 00:06:33.146 14:22:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71055 00:06:33.146 14:22:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # 
uname 00:06:33.146 14:22:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.146 14:22:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71055 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.146 killing process with pid 71055 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71055' 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71055 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71055 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:33.146 00:06:33.146 real 0m6.055s 00:06:33.146 user 0m5.800s 00:06:33.146 sys 0m0.403s 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.146 ************************************ 00:06:33.146 END TEST skip_rpc_with_json 00:06:33.146 ************************************ 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.146 14:22:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:33.146 14:22:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.146 14:22:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.146 14:22:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.146 ************************************ 00:06:33.146 START TEST skip_rpc_with_delay 00:06:33.146 ************************************ 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.146 
14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:33.146 14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:33.146 [2024-12-16 14:22:25.332831] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:33.404 14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:33.404 14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.404 14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:33.404 14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.404 00:06:33.404 real 0m0.088s 00:06:33.404 user 0m0.060s 00:06:33.404 sys 0m0.027s 00:06:33.404 14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.404 ************************************ 00:06:33.404 END TEST skip_rpc_with_delay 00:06:33.404 ************************************ 00:06:33.404 14:22:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:33.404 14:22:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:33.404 14:22:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:33.404 14:22:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:33.404 14:22:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.404 14:22:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.404 14:22:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.404 ************************************ 00:06:33.404 START TEST exit_on_failed_rpc_init 00:06:33.404 ************************************ 00:06:33.404 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:33.404 14:22:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71159 00:06:33.404 14:22:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71159 00:06:33.404 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 71159 ']' 00:06:33.404 14:22:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.404 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.404 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.404 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.404 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.404 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:33.404 [2024-12-16 14:22:25.476897] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:33.404 [2024-12-16 14:22:25.476987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71159 ] 00:06:33.663 [2024-12-16 14:22:25.624422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.663 [2024-12-16 14:22:25.643362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.663 [2024-12-16 14:22:25.676857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.663 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.663 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:33.663 14:22:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.663 14:22:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:33.663 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:33.663 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:33.663 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.663 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.663 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.663 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.663 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.663 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.663 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.663 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:33.663 14:22:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:33.922 [2024-12-16 14:22:25.866451] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:33.922 [2024-12-16 14:22:25.866547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71169 ] 00:06:33.922 [2024-12-16 14:22:26.025407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.922 [2024-12-16 14:22:26.049229] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.922 [2024-12-16 14:22:26.049342] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:33.922 [2024-12-16 14:22:26.049358] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:33.922 [2024-12-16 14:22:26.049368] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.922 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:33.922 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.922 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:33.922 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:33.922 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:33.922 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.922 14:22:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:33.922 14:22:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71159 00:06:33.922 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 71159 ']' 00:06:33.922 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 71159 00:06:33.922 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:33.922 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.922 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71159 00:06:34.181 killing process with pid 71159 00:06:34.181 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.181 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.181 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71159' 00:06:34.181 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 71159 00:06:34.181 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 71159 00:06:34.181 00:06:34.181 real 0m0.926s 00:06:34.181 user 0m1.057s 00:06:34.181 sys 0m0.274s 00:06:34.181 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.181 ************************************ 00:06:34.181 END TEST exit_on_failed_rpc_init 00:06:34.181 ************************************ 00:06:34.181 14:22:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:34.181 14:22:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:34.440 ************************************ 00:06:34.440 END TEST skip_rpc 00:06:34.440 ************************************ 00:06:34.440 00:06:34.440 real 0m12.716s 00:06:34.440 user 0m12.094s 00:06:34.440 sys 0m1.077s 00:06:34.440 14:22:26 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.440 14:22:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.440 14:22:26 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:34.440 14:22:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.440 14:22:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.440 14:22:26 -- common/autotest_common.sh@10 -- # set +x 00:06:34.440 
************************************ 00:06:34.440 START TEST rpc_client 00:06:34.440 ************************************ 00:06:34.440 14:22:26 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:34.440 * Looking for test storage... 00:06:34.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:34.440 14:22:26 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:34.440 14:22:26 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:34.440 14:22:26 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:34.440 14:22:26 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.440 14:22:26 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:34.440 14:22:26 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.440 14:22:26 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:34.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.440 --rc genhtml_branch_coverage=1 00:06:34.440 --rc genhtml_function_coverage=1 00:06:34.440 --rc genhtml_legend=1 00:06:34.440 --rc geninfo_all_blocks=1 00:06:34.440 --rc geninfo_unexecuted_blocks=1 00:06:34.440 00:06:34.440 ' 00:06:34.440 14:22:26 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:34.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.440 --rc genhtml_branch_coverage=1 00:06:34.440 --rc genhtml_function_coverage=1 00:06:34.440 --rc genhtml_legend=1 00:06:34.440 --rc geninfo_all_blocks=1 00:06:34.440 --rc geninfo_unexecuted_blocks=1 00:06:34.440 00:06:34.440 ' 00:06:34.440 14:22:26 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:34.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.440 --rc genhtml_branch_coverage=1 00:06:34.440 --rc genhtml_function_coverage=1 00:06:34.440 --rc genhtml_legend=1 00:06:34.440 --rc geninfo_all_blocks=1 00:06:34.440 --rc geninfo_unexecuted_blocks=1 00:06:34.440 00:06:34.440 ' 00:06:34.440 14:22:26 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:34.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.440 --rc genhtml_branch_coverage=1 00:06:34.440 --rc genhtml_function_coverage=1 00:06:34.440 --rc genhtml_legend=1 00:06:34.440 --rc geninfo_all_blocks=1 00:06:34.440 --rc geninfo_unexecuted_blocks=1 00:06:34.440 00:06:34.440 ' 00:06:34.440 14:22:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:34.440 OK 00:06:34.700 14:22:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:34.700 00:06:34.700 real 0m0.210s 00:06:34.700 user 0m0.125s 00:06:34.700 sys 0m0.093s 00:06:34.700 ************************************ 00:06:34.700 END TEST rpc_client 00:06:34.700 ************************************ 00:06:34.700 14:22:26 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.700 14:22:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:34.700 14:22:26 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:34.700 14:22:26 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.700 14:22:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.700 14:22:26 -- common/autotest_common.sh@10 -- # set +x 00:06:34.700 ************************************ 00:06:34.700 START TEST json_config 00:06:34.700 ************************************ 00:06:34.700 14:22:26 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:34.700 14:22:26 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:34.700 14:22:26 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:34.700 14:22:26 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:34.700 14:22:26 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:34.700 14:22:26 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.700 14:22:26 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.700 14:22:26 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.700 14:22:26 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.700 14:22:26 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.700 14:22:26 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.700 14:22:26 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.700 14:22:26 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.700 14:22:26 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.700 14:22:26 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.700 14:22:26 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.700 14:22:26 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:34.700 14:22:26 json_config -- scripts/common.sh@345 -- # : 1 00:06:34.700 14:22:26 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.700 14:22:26 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.700 14:22:26 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:34.700 14:22:26 json_config -- scripts/common.sh@353 -- # local d=1 00:06:34.700 14:22:26 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.700 14:22:26 json_config -- scripts/common.sh@355 -- # echo 1 00:06:34.700 14:22:26 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.700 14:22:26 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:34.700 14:22:26 json_config -- scripts/common.sh@353 -- # local d=2 00:06:34.700 14:22:26 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.700 14:22:26 json_config -- scripts/common.sh@355 -- # echo 2 00:06:34.700 14:22:26 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.700 14:22:26 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.700 14:22:26 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.700 14:22:26 json_config -- scripts/common.sh@368 -- # return 0 00:06:34.700 14:22:26 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.700 14:22:26 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:34.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.700 --rc genhtml_branch_coverage=1 00:06:34.700 --rc genhtml_function_coverage=1 00:06:34.700 --rc genhtml_legend=1 00:06:34.700 --rc geninfo_all_blocks=1 00:06:34.700 --rc geninfo_unexecuted_blocks=1 00:06:34.700 00:06:34.700 ' 00:06:34.700 14:22:26 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:34.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.700 --rc genhtml_branch_coverage=1 00:06:34.700 --rc genhtml_function_coverage=1 00:06:34.700 --rc genhtml_legend=1 00:06:34.700 --rc geninfo_all_blocks=1 00:06:34.700 --rc geninfo_unexecuted_blocks=1 00:06:34.700 00:06:34.700 ' 00:06:34.700 14:22:26 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:34.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.700 --rc genhtml_branch_coverage=1 00:06:34.700 --rc genhtml_function_coverage=1 00:06:34.700 --rc genhtml_legend=1 00:06:34.700 --rc geninfo_all_blocks=1 00:06:34.700 --rc geninfo_unexecuted_blocks=1 00:06:34.700 00:06:34.700 ' 00:06:34.700 14:22:26 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:34.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.700 --rc genhtml_branch_coverage=1 00:06:34.700 --rc genhtml_function_coverage=1 00:06:34.700 --rc genhtml_legend=1 00:06:34.700 --rc geninfo_all_blocks=1 00:06:34.700 --rc geninfo_unexecuted_blocks=1 00:06:34.700 00:06:34.700 ' 00:06:34.700 14:22:26 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:34.700 14:22:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:34.700 14:22:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:34.700 14:22:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:34.700 14:22:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:34.700 14:22:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:34.701 14:22:26 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:34.701 14:22:26 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:34.701 14:22:26 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.701 14:22:26 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.701 14:22:26 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.701 14:22:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.701 14:22:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.701 14:22:26 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.701 14:22:26 json_config -- paths/export.sh@5 -- # export PATH 00:06:34.701 14:22:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@51 -- # : 0 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:34.701 14:22:26 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:34.701 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:34.701 14:22:26 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:34.701 INFO: JSON configuration test init 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:34.701 14:22:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.701 14:22:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:34.701 14:22:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.701 14:22:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.701 14:22:26 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:34.701 14:22:26 json_config -- json_config/common.sh@9 -- # local app=target 00:06:34.701 14:22:26 json_config -- json_config/common.sh@10 -- # shift 
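The target for this json_config run is launched with -r /var/tmp/spdk_tgt.sock --wait-for-rpc, and the harness then blocks in waitforlisten until the RPC socket answers. waitforlisten itself lives in autotest_common.sh and is not shown in this log; the following is only a minimal bash sketch of an equivalent readiness poll, assuming scripts/rpc.py is usable directly and reusing the spdk_get_version call exercised earlier in this run:

  # hypothetical stand-in for waitforlisten: poll the RPC socket until the target answers
  sock=/var/tmp/spdk_tgt.sock
  for _ in $(seq 1 100); do
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" spdk_get_version >/dev/null 2>&1; then
          echo "target is listening on $sock"
          break
      fi
      sleep 0.5   # same interval the json_config shutdown loop uses below
  done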
00:06:34.960 14:22:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:34.960 14:22:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:34.960 14:22:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:34.960 14:22:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:34.960 14:22:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:34.960 14:22:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=71303 00:06:34.960 14:22:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:34.960 Waiting for target to run... 00:06:34.960 14:22:26 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:34.960 14:22:26 json_config -- json_config/common.sh@25 -- # waitforlisten 71303 /var/tmp/spdk_tgt.sock 00:06:34.960 14:22:26 json_config -- common/autotest_common.sh@835 -- # '[' -z 71303 ']' 00:06:34.960 14:22:26 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:34.960 14:22:26 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.960 14:22:26 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:34.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:34.960 14:22:26 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.960 14:22:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.960 [2024-12-16 14:22:26.953078] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:34.960 [2024-12-16 14:22:26.953320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71303 ] 00:06:35.219 [2024-12-16 14:22:27.237530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.219 [2024-12-16 14:22:27.249724] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.787 00:06:35.787 14:22:27 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.787 14:22:27 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:35.787 14:22:27 json_config -- json_config/common.sh@26 -- # echo '' 00:06:35.787 14:22:27 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:35.787 14:22:27 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:35.787 14:22:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.787 14:22:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.046 14:22:27 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:36.046 14:22:27 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:36.046 14:22:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.046 14:22:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.046 14:22:28 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:36.046 14:22:28 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:36.046 14:22:28 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:36.306 [2024-12-16 14:22:28.355938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.590 14:22:28 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:36.590 14:22:28 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:36.590 14:22:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:36.590 14:22:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.590 14:22:28 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:36.590 14:22:28 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:36.590 14:22:28 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:36.590 14:22:28 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:36.590 14:22:28 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:36.590 14:22:28 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:36.590 14:22:28 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:36.590 14:22:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:36.854 14:22:28 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@54 -- # sort 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:36.855 14:22:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.855 14:22:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:36.855 14:22:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:36.855 14:22:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.855 14:22:28 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:36.855 14:22:28 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:36.855 14:22:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:37.113 MallocForNvmf0 00:06:37.113 14:22:29 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:37.113 14:22:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:37.372 MallocForNvmf1 00:06:37.372 14:22:29 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:37.372 14:22:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:37.630 [2024-12-16 14:22:29.693542] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.630 14:22:29 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:37.630 14:22:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:37.889 14:22:29 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:37.889 14:22:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:38.148 14:22:30 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:38.148 14:22:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:38.407 14:22:30 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:38.407 14:22:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:38.666 [2024-12-16 14:22:30.698094] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:38.666 14:22:30 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:38.666 14:22:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:38.666 14:22:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.666 14:22:30 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:38.666 14:22:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:38.666 14:22:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.666 14:22:30 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:06:38.666 14:22:30 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:38.666 14:22:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:38.925 MallocBdevForConfigChangeCheck 00:06:38.925 14:22:31 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:38.925 14:22:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:38.925 14:22:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.925 14:22:31 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:38.925 14:22:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:39.492 14:22:31 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:39.492 INFO: shutting down applications... 00:06:39.492 14:22:31 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:39.492 14:22:31 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:39.492 14:22:31 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:39.492 14:22:31 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:39.751 Calling clear_iscsi_subsystem 00:06:39.751 Calling clear_nvmf_subsystem 00:06:39.751 Calling clear_nbd_subsystem 00:06:39.751 Calling clear_ublk_subsystem 00:06:39.751 Calling clear_vhost_blk_subsystem 00:06:39.751 Calling clear_vhost_scsi_subsystem 00:06:39.751 Calling clear_bdev_subsystem 00:06:39.751 14:22:31 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:39.751 14:22:31 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:39.751 14:22:31 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:39.751 14:22:31 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:39.751 14:22:31 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:39.751 14:22:31 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:40.010 14:22:32 json_config -- json_config/json_config.sh@352 -- # break 00:06:40.010 14:22:32 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:40.011 14:22:32 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:40.011 14:22:32 json_config -- json_config/common.sh@31 -- # local app=target 00:06:40.011 14:22:32 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:40.011 14:22:32 json_config -- json_config/common.sh@35 -- # [[ -n 71303 ]] 00:06:40.011 14:22:32 json_config -- json_config/common.sh@38 -- # kill -SIGINT 71303 00:06:40.011 14:22:32 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:40.011 14:22:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:40.011 14:22:32 json_config -- json_config/common.sh@41 -- # kill -0 71303 00:06:40.011 14:22:32 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:06:40.578 14:22:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:40.578 14:22:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:40.578 14:22:32 json_config -- json_config/common.sh@41 -- # kill -0 71303 00:06:40.578 14:22:32 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:40.578 SPDK target shutdown done 00:06:40.578 INFO: relaunching applications... 00:06:40.578 14:22:32 json_config -- json_config/common.sh@43 -- # break 00:06:40.578 14:22:32 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:40.578 14:22:32 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:40.578 14:22:32 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:40.578 14:22:32 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:40.578 14:22:32 json_config -- json_config/common.sh@9 -- # local app=target 00:06:40.578 14:22:32 json_config -- json_config/common.sh@10 -- # shift 00:06:40.578 14:22:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:40.578 14:22:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:40.578 14:22:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:40.578 14:22:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:40.578 14:22:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:40.578 Waiting for target to run... 00:06:40.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:40.578 14:22:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=71503 00:06:40.579 14:22:32 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:40.579 14:22:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:40.579 14:22:32 json_config -- json_config/common.sh@25 -- # waitforlisten 71503 /var/tmp/spdk_tgt.sock 00:06:40.579 14:22:32 json_config -- common/autotest_common.sh@835 -- # '[' -z 71503 ']' 00:06:40.579 14:22:32 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:40.579 14:22:32 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.579 14:22:32 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:40.579 14:22:32 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.579 14:22:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.579 [2024-12-16 14:22:32.745841] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
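The relaunch above restarts spdk_tgt against the JSON configuration that was just saved, on a dedicated RPC socket. A minimal sketch of that pattern follows; the flags and paths are the ones visible in the log, but the polling on rpc_get_methods is a stand-in for the harness's own waitforlisten helper, and tgt_pid is an illustrative variable name.

    # Relaunch the SPDK target from a previously saved JSON config and wait
    # until its RPC socket answers (sketch; paths as shown in the log above).
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk_tgt.sock

    "$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &
    tgt_pid=$!

    # Poll the UNIX-domain RPC socket until the target is ready to serve RPCs.
    until "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done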
00:06:40.579 [2024-12-16 14:22:32.746084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71503 ] 00:06:40.837 [2024-12-16 14:22:33.025088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.096 [2024-12-16 14:22:33.040463] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.096 [2024-12-16 14:22:33.169401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.355 [2024-12-16 14:22:33.358835] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.355 [2024-12-16 14:22:33.390852] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:41.614 00:06:41.614 INFO: Checking if target configuration is the same... 00:06:41.614 14:22:33 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.614 14:22:33 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:41.614 14:22:33 json_config -- json_config/common.sh@26 -- # echo '' 00:06:41.614 14:22:33 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:41.614 14:22:33 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:41.614 14:22:33 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:41.614 14:22:33 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:41.614 14:22:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:41.614 + '[' 2 -ne 2 ']' 00:06:41.614 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:41.614 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:41.614 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:41.614 +++ basename /dev/fd/62 00:06:41.614 ++ mktemp /tmp/62.XXX 00:06:41.614 + tmp_file_1=/tmp/62.10h 00:06:41.614 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:41.614 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:41.614 + tmp_file_2=/tmp/spdk_tgt_config.json.igV 00:06:41.614 + ret=0 00:06:41.614 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:42.181 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:42.181 + diff -u /tmp/62.10h /tmp/spdk_tgt_config.json.igV 00:06:42.181 INFO: JSON config files are the same 00:06:42.181 + echo 'INFO: JSON config files are the same' 00:06:42.181 + rm /tmp/62.10h /tmp/spdk_tgt_config.json.igV 00:06:42.181 + exit 0 00:06:42.181 INFO: changing configuration and checking if this can be detected... 00:06:42.181 14:22:34 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:42.181 14:22:34 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:06:42.181 14:22:34 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:42.181 14:22:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:42.440 14:22:34 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:42.440 14:22:34 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:42.440 14:22:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:42.440 + '[' 2 -ne 2 ']' 00:06:42.440 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:42.440 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:42.440 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:42.440 +++ basename /dev/fd/62 00:06:42.440 ++ mktemp /tmp/62.XXX 00:06:42.440 + tmp_file_1=/tmp/62.xgP 00:06:42.440 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:42.440 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:42.440 + tmp_file_2=/tmp/spdk_tgt_config.json.bG1 00:06:42.440 + ret=0 00:06:42.440 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:43.022 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:43.022 + diff -u /tmp/62.xgP /tmp/spdk_tgt_config.json.bG1 00:06:43.022 + ret=1 00:06:43.022 + echo '=== Start of file: /tmp/62.xgP ===' 00:06:43.022 + cat /tmp/62.xgP 00:06:43.022 + echo '=== End of file: /tmp/62.xgP ===' 00:06:43.022 + echo '' 00:06:43.022 + echo '=== Start of file: /tmp/spdk_tgt_config.json.bG1 ===' 00:06:43.022 + cat /tmp/spdk_tgt_config.json.bG1 00:06:43.022 + echo '=== End of file: /tmp/spdk_tgt_config.json.bG1 ===' 00:06:43.022 + echo '' 00:06:43.022 + rm /tmp/62.xgP /tmp/spdk_tgt_config.json.bG1 00:06:43.022 + exit 1 00:06:43.022 INFO: configuration change detected. 00:06:43.022 14:22:34 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
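The same/changed checks above reduce to dumping the live configuration, sorting both sides for a stable comparison, and diffing; deleting the MallocBdevForConfigChangeCheck marker bdev is what guarantees a difference on the second pass. A rough sketch of that comparison, assuming config_filter.py reads the config on stdin as the harness pipes it (temporary file names here are illustrative):

    # Compare the running config against the file the target was started from (sketch).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    SOCK=/var/tmp/spdk_tgt.sock

    "$RPC" -s "$SOCK" save_config | "$FILTER" -method sort > /tmp/live.json
    "$FILTER" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/file.json

    if diff -u /tmp/file.json /tmp/live.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi

    # Removing the marker bdev is one way to force a detectable change:
    "$RPC" -s "$SOCK" bdev_malloc_delete MallocBdevForConfigChangeCheck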
00:06:43.022 14:22:34 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:43.022 14:22:34 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:43.022 14:22:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.022 14:22:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.022 14:22:34 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:43.022 14:22:34 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:43.022 14:22:34 json_config -- json_config/json_config.sh@324 -- # [[ -n 71503 ]] 00:06:43.022 14:22:34 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:43.022 14:22:34 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:43.022 14:22:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.022 14:22:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.022 14:22:35 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:43.022 14:22:35 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:43.022 14:22:35 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:43.022 14:22:35 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:43.022 14:22:35 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:43.022 14:22:35 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:43.022 14:22:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.022 14:22:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.022 14:22:35 json_config -- json_config/json_config.sh@330 -- # killprocess 71503 00:06:43.022 14:22:35 json_config -- common/autotest_common.sh@954 -- # '[' -z 71503 ']' 00:06:43.022 14:22:35 json_config -- common/autotest_common.sh@958 -- # kill -0 71503 00:06:43.022 14:22:35 json_config -- common/autotest_common.sh@959 -- # uname 00:06:43.022 14:22:35 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.022 14:22:35 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71503 00:06:43.022 killing process with pid 71503 00:06:43.022 14:22:35 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.022 14:22:35 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.022 14:22:35 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71503' 00:06:43.022 14:22:35 json_config -- common/autotest_common.sh@973 -- # kill 71503 00:06:43.022 14:22:35 json_config -- common/autotest_common.sh@978 -- # wait 71503 00:06:43.282 14:22:35 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:43.282 14:22:35 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:43.282 14:22:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.282 14:22:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.282 INFO: Success 00:06:43.282 14:22:35 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:43.282 14:22:35 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:43.282 ************************************ 00:06:43.282 END TEST json_config 00:06:43.282 
************************************ 00:06:43.282 00:06:43.282 real 0m8.573s 00:06:43.282 user 0m12.561s 00:06:43.282 sys 0m1.427s 00:06:43.282 14:22:35 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.282 14:22:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.282 14:22:35 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:43.282 14:22:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.282 14:22:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.282 14:22:35 -- common/autotest_common.sh@10 -- # set +x 00:06:43.282 ************************************ 00:06:43.282 START TEST json_config_extra_key 00:06:43.282 ************************************ 00:06:43.282 14:22:35 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:43.282 14:22:35 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:43.282 14:22:35 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:43.282 14:22:35 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:43.282 14:22:35 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.282 14:22:35 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:43.282 14:22:35 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.282 14:22:35 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:43.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.282 --rc genhtml_branch_coverage=1 00:06:43.282 --rc genhtml_function_coverage=1 00:06:43.282 --rc genhtml_legend=1 00:06:43.282 --rc geninfo_all_blocks=1 00:06:43.282 --rc geninfo_unexecuted_blocks=1 00:06:43.282 00:06:43.282 ' 00:06:43.282 14:22:35 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:43.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.282 --rc genhtml_branch_coverage=1 00:06:43.282 --rc genhtml_function_coverage=1 00:06:43.282 --rc genhtml_legend=1 00:06:43.282 --rc geninfo_all_blocks=1 00:06:43.282 --rc geninfo_unexecuted_blocks=1 00:06:43.282 00:06:43.282 ' 00:06:43.282 14:22:35 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:43.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.282 --rc genhtml_branch_coverage=1 00:06:43.282 --rc genhtml_function_coverage=1 00:06:43.282 --rc genhtml_legend=1 00:06:43.282 --rc geninfo_all_blocks=1 00:06:43.282 --rc geninfo_unexecuted_blocks=1 00:06:43.282 00:06:43.282 ' 00:06:43.282 14:22:35 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:43.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.282 --rc genhtml_branch_coverage=1 00:06:43.282 --rc genhtml_function_coverage=1 00:06:43.282 --rc genhtml_legend=1 00:06:43.282 --rc geninfo_all_blocks=1 00:06:43.282 --rc geninfo_unexecuted_blocks=1 00:06:43.282 00:06:43.282 ' 00:06:43.282 14:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:43.282 14:22:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.542 14:22:35 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.542 14:22:35 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.542 14:22:35 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.542 14:22:35 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.542 14:22:35 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.542 14:22:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.542 14:22:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.542 14:22:35 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.542 14:22:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:43.542 14:22:35 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:43.542 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:43.542 14:22:35 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:43.542 14:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:43.542 14:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:43.542 14:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:43.542 14:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:43.542 14:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:43.542 14:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:43.542 14:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:43.542 INFO: launching applications... 00:06:43.542 Waiting for target to run... 00:06:43.542 14:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:43.542 14:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:43.542 14:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:43.542 14:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
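common.sh keys its per-application state by app name in bash associative arrays (app_pid, app_socket, app_params, configs_path), which is what lets the same start/shutdown helpers drive any app the test declares. A stripped-down illustration of that layout, using the values visible above:

    # Per-app bookkeeping in the style of test/json_config/common.sh (sketch).
    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    app=target
    echo "launching $app with ${app_params[$app]} on ${app_socket[$app]}"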
00:06:43.542 14:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:43.542 14:22:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:43.542 14:22:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:43.542 14:22:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:43.542 14:22:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:43.542 14:22:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:43.542 14:22:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:43.542 14:22:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:43.542 14:22:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71653 00:06:43.542 14:22:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:43.542 14:22:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71653 /var/tmp/spdk_tgt.sock 00:06:43.542 14:22:35 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:43.542 14:22:35 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 71653 ']' 00:06:43.542 14:22:35 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:43.542 14:22:35 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.542 14:22:35 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:43.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:43.542 14:22:35 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.542 14:22:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:43.542 [2024-12-16 14:22:35.566394] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:43.542 [2024-12-16 14:22:35.566696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71653 ] 00:06:43.801 [2024-12-16 14:22:35.867488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.801 [2024-12-16 14:22:35.880156] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.801 [2024-12-16 14:22:35.902592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.737 14:22:36 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.738 14:22:36 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:44.738 14:22:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:44.738 00:06:44.738 14:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:44.738 INFO: shutting down applications... 
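The shutdown that follows uses the same pattern as the earlier json_config run: send SIGINT to the recorded PID, then poll with kill -0 in half-second steps, up to 30 tries, until the process is gone. A condensed sketch, where target_pid is a stand-in for the app_pid["$app"] entry recorded at launch:

    # Graceful SPDK target shutdown: SIGINT, then wait up to ~15s (sketch).
    pid="$target_pid"   # stand-in for app_pid["$app"]
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done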
00:06:44.738 14:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:44.738 14:22:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:44.738 14:22:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:44.738 14:22:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71653 ]] 00:06:44.738 14:22:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71653 00:06:44.738 14:22:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:44.738 14:22:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:44.738 14:22:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71653 00:06:44.738 14:22:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:44.996 14:22:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:44.996 14:22:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:44.996 14:22:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71653 00:06:44.996 14:22:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:44.996 14:22:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:44.996 14:22:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:44.996 SPDK target shutdown done 00:06:44.996 Success 00:06:44.996 14:22:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:44.996 14:22:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:44.996 00:06:44.996 real 0m1.789s 00:06:44.996 user 0m1.653s 00:06:44.996 sys 0m0.310s 00:06:44.996 ************************************ 00:06:44.996 END TEST json_config_extra_key 00:06:44.996 ************************************ 00:06:44.996 14:22:37 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.996 14:22:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:44.996 14:22:37 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:44.996 14:22:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.996 14:22:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.996 14:22:37 -- common/autotest_common.sh@10 -- # set +x 00:06:44.996 ************************************ 00:06:44.996 START TEST alias_rpc 00:06:44.996 ************************************ 00:06:44.997 14:22:37 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:45.257 * Looking for test storage... 
00:06:45.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:45.257 14:22:37 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:45.257 14:22:37 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:45.257 14:22:37 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:45.257 14:22:37 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:45.257 14:22:37 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:45.258 14:22:37 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.258 14:22:37 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:45.258 14:22:37 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.258 14:22:37 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.258 14:22:37 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.258 14:22:37 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:45.258 14:22:37 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.258 14:22:37 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:45.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.258 --rc genhtml_branch_coverage=1 00:06:45.258 --rc genhtml_function_coverage=1 00:06:45.258 --rc genhtml_legend=1 00:06:45.258 --rc geninfo_all_blocks=1 00:06:45.258 --rc geninfo_unexecuted_blocks=1 00:06:45.258 00:06:45.258 ' 00:06:45.258 14:22:37 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:45.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.258 --rc genhtml_branch_coverage=1 00:06:45.258 --rc genhtml_function_coverage=1 00:06:45.258 --rc genhtml_legend=1 00:06:45.258 --rc geninfo_all_blocks=1 00:06:45.258 --rc geninfo_unexecuted_blocks=1 00:06:45.258 00:06:45.258 ' 00:06:45.258 14:22:37 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:45.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.258 --rc genhtml_branch_coverage=1 00:06:45.258 --rc genhtml_function_coverage=1 00:06:45.258 --rc genhtml_legend=1 00:06:45.258 --rc geninfo_all_blocks=1 00:06:45.258 --rc geninfo_unexecuted_blocks=1 00:06:45.258 00:06:45.258 ' 00:06:45.258 14:22:37 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:45.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.258 --rc genhtml_branch_coverage=1 00:06:45.258 --rc genhtml_function_coverage=1 00:06:45.258 --rc genhtml_legend=1 00:06:45.258 --rc geninfo_all_blocks=1 00:06:45.258 --rc geninfo_unexecuted_blocks=1 00:06:45.258 00:06:45.258 ' 00:06:45.258 14:22:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:45.258 14:22:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71731 00:06:45.258 14:22:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71731 00:06:45.258 14:22:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:45.258 14:22:37 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 71731 ']' 00:06:45.258 14:22:37 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.258 14:22:37 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.258 14:22:37 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.258 14:22:37 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.258 14:22:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.258 [2024-12-16 14:22:37.407358] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:45.258 [2024-12-16 14:22:37.407491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71731 ] 00:06:45.517 [2024-12-16 14:22:37.552746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.517 [2024-12-16 14:22:37.571799] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.517 [2024-12-16 14:22:37.605535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.775 14:22:37 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.775 14:22:37 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:45.775 14:22:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:46.034 14:22:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71731 00:06:46.034 14:22:38 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 71731 ']' 00:06:46.034 14:22:38 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 71731 00:06:46.034 14:22:38 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:46.034 14:22:38 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.034 14:22:38 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71731 00:06:46.035 14:22:38 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.035 killing process with pid 71731 00:06:46.035 14:22:38 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.035 14:22:38 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71731' 00:06:46.035 14:22:38 alias_rpc -- common/autotest_common.sh@973 -- # kill 71731 00:06:46.035 14:22:38 alias_rpc -- common/autotest_common.sh@978 -- # wait 71731 00:06:46.294 ************************************ 00:06:46.294 END TEST alias_rpc 00:06:46.294 ************************************ 00:06:46.294 00:06:46.294 real 0m1.101s 00:06:46.294 user 0m1.334s 00:06:46.294 sys 0m0.287s 00:06:46.294 14:22:38 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.294 14:22:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.294 14:22:38 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:46.294 14:22:38 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:46.294 14:22:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.294 14:22:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.294 14:22:38 -- common/autotest_common.sh@10 -- # set +x 00:06:46.294 ************************************ 00:06:46.294 START TEST spdkcli_tcp 00:06:46.295 ************************************ 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:46.295 * Looking for test storage... 
00:06:46.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.295 14:22:38 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.295 --rc genhtml_branch_coverage=1 00:06:46.295 --rc genhtml_function_coverage=1 00:06:46.295 --rc genhtml_legend=1 00:06:46.295 --rc geninfo_all_blocks=1 00:06:46.295 --rc geninfo_unexecuted_blocks=1 00:06:46.295 00:06:46.295 ' 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.295 --rc genhtml_branch_coverage=1 00:06:46.295 --rc genhtml_function_coverage=1 00:06:46.295 --rc genhtml_legend=1 00:06:46.295 --rc geninfo_all_blocks=1 00:06:46.295 --rc geninfo_unexecuted_blocks=1 00:06:46.295 
00:06:46.295 ' 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.295 --rc genhtml_branch_coverage=1 00:06:46.295 --rc genhtml_function_coverage=1 00:06:46.295 --rc genhtml_legend=1 00:06:46.295 --rc geninfo_all_blocks=1 00:06:46.295 --rc geninfo_unexecuted_blocks=1 00:06:46.295 00:06:46.295 ' 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.295 --rc genhtml_branch_coverage=1 00:06:46.295 --rc genhtml_function_coverage=1 00:06:46.295 --rc genhtml_legend=1 00:06:46.295 --rc geninfo_all_blocks=1 00:06:46.295 --rc geninfo_unexecuted_blocks=1 00:06:46.295 00:06:46.295 ' 00:06:46.295 14:22:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:46.295 14:22:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:46.295 14:22:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:46.295 14:22:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:46.295 14:22:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:46.295 14:22:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:46.295 14:22:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.295 14:22:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71802 00:06:46.295 14:22:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71802 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 71802 ']' 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.295 14:22:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.295 14:22:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.554 [2024-12-16 14:22:38.541755] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:46.554 [2024-12-16 14:22:38.541871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71802 ] 00:06:46.554 [2024-12-16 14:22:38.685134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.554 [2024-12-16 14:22:38.704405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.554 [2024-12-16 14:22:38.704411] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.554 [2024-12-16 14:22:38.738145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.813 14:22:38 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.813 14:22:38 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:46.813 14:22:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71806 00:06:46.813 14:22:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:46.813 14:22:38 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:47.073 [ 00:06:47.073 "bdev_malloc_delete", 00:06:47.073 "bdev_malloc_create", 00:06:47.073 "bdev_null_resize", 00:06:47.073 "bdev_null_delete", 00:06:47.073 "bdev_null_create", 00:06:47.073 "bdev_nvme_cuse_unregister", 00:06:47.073 "bdev_nvme_cuse_register", 00:06:47.073 "bdev_opal_new_user", 00:06:47.073 "bdev_opal_set_lock_state", 00:06:47.073 "bdev_opal_delete", 00:06:47.073 "bdev_opal_get_info", 00:06:47.073 "bdev_opal_create", 00:06:47.073 "bdev_nvme_opal_revert", 00:06:47.073 "bdev_nvme_opal_init", 00:06:47.073 "bdev_nvme_send_cmd", 00:06:47.073 "bdev_nvme_set_keys", 00:06:47.073 "bdev_nvme_get_path_iostat", 00:06:47.073 "bdev_nvme_get_mdns_discovery_info", 00:06:47.073 "bdev_nvme_stop_mdns_discovery", 00:06:47.073 "bdev_nvme_start_mdns_discovery", 00:06:47.073 "bdev_nvme_set_multipath_policy", 00:06:47.073 "bdev_nvme_set_preferred_path", 00:06:47.073 "bdev_nvme_get_io_paths", 00:06:47.073 "bdev_nvme_remove_error_injection", 00:06:47.073 "bdev_nvme_add_error_injection", 00:06:47.073 "bdev_nvme_get_discovery_info", 00:06:47.073 "bdev_nvme_stop_discovery", 00:06:47.073 "bdev_nvme_start_discovery", 00:06:47.073 "bdev_nvme_get_controller_health_info", 00:06:47.073 "bdev_nvme_disable_controller", 00:06:47.073 "bdev_nvme_enable_controller", 00:06:47.073 "bdev_nvme_reset_controller", 00:06:47.073 "bdev_nvme_get_transport_statistics", 00:06:47.073 "bdev_nvme_apply_firmware", 00:06:47.073 "bdev_nvme_detach_controller", 00:06:47.073 "bdev_nvme_get_controllers", 00:06:47.073 "bdev_nvme_attach_controller", 00:06:47.073 "bdev_nvme_set_hotplug", 00:06:47.073 "bdev_nvme_set_options", 00:06:47.073 "bdev_passthru_delete", 00:06:47.073 "bdev_passthru_create", 00:06:47.073 "bdev_lvol_set_parent_bdev", 00:06:47.073 "bdev_lvol_set_parent", 00:06:47.073 "bdev_lvol_check_shallow_copy", 00:06:47.073 "bdev_lvol_start_shallow_copy", 00:06:47.073 "bdev_lvol_grow_lvstore", 00:06:47.073 "bdev_lvol_get_lvols", 00:06:47.073 "bdev_lvol_get_lvstores", 00:06:47.073 "bdev_lvol_delete", 00:06:47.073 "bdev_lvol_set_read_only", 00:06:47.073 "bdev_lvol_resize", 00:06:47.073 "bdev_lvol_decouple_parent", 00:06:47.073 "bdev_lvol_inflate", 00:06:47.073 "bdev_lvol_rename", 00:06:47.073 "bdev_lvol_clone_bdev", 00:06:47.073 "bdev_lvol_clone", 00:06:47.073 "bdev_lvol_snapshot", 
00:06:47.073 "bdev_lvol_create", 00:06:47.073 "bdev_lvol_delete_lvstore", 00:06:47.073 "bdev_lvol_rename_lvstore", 00:06:47.073 "bdev_lvol_create_lvstore", 00:06:47.073 "bdev_raid_set_options", 00:06:47.073 "bdev_raid_remove_base_bdev", 00:06:47.073 "bdev_raid_add_base_bdev", 00:06:47.073 "bdev_raid_delete", 00:06:47.073 "bdev_raid_create", 00:06:47.073 "bdev_raid_get_bdevs", 00:06:47.073 "bdev_error_inject_error", 00:06:47.073 "bdev_error_delete", 00:06:47.073 "bdev_error_create", 00:06:47.073 "bdev_split_delete", 00:06:47.073 "bdev_split_create", 00:06:47.073 "bdev_delay_delete", 00:06:47.073 "bdev_delay_create", 00:06:47.073 "bdev_delay_update_latency", 00:06:47.073 "bdev_zone_block_delete", 00:06:47.073 "bdev_zone_block_create", 00:06:47.073 "blobfs_create", 00:06:47.073 "blobfs_detect", 00:06:47.073 "blobfs_set_cache_size", 00:06:47.073 "bdev_aio_delete", 00:06:47.073 "bdev_aio_rescan", 00:06:47.073 "bdev_aio_create", 00:06:47.073 "bdev_ftl_set_property", 00:06:47.073 "bdev_ftl_get_properties", 00:06:47.073 "bdev_ftl_get_stats", 00:06:47.073 "bdev_ftl_unmap", 00:06:47.073 "bdev_ftl_unload", 00:06:47.073 "bdev_ftl_delete", 00:06:47.073 "bdev_ftl_load", 00:06:47.073 "bdev_ftl_create", 00:06:47.073 "bdev_virtio_attach_controller", 00:06:47.073 "bdev_virtio_scsi_get_devices", 00:06:47.073 "bdev_virtio_detach_controller", 00:06:47.073 "bdev_virtio_blk_set_hotplug", 00:06:47.073 "bdev_iscsi_delete", 00:06:47.073 "bdev_iscsi_create", 00:06:47.073 "bdev_iscsi_set_options", 00:06:47.073 "bdev_uring_delete", 00:06:47.073 "bdev_uring_rescan", 00:06:47.073 "bdev_uring_create", 00:06:47.073 "accel_error_inject_error", 00:06:47.073 "ioat_scan_accel_module", 00:06:47.073 "dsa_scan_accel_module", 00:06:47.073 "iaa_scan_accel_module", 00:06:47.073 "keyring_file_remove_key", 00:06:47.073 "keyring_file_add_key", 00:06:47.073 "keyring_linux_set_options", 00:06:47.073 "fsdev_aio_delete", 00:06:47.073 "fsdev_aio_create", 00:06:47.073 "iscsi_get_histogram", 00:06:47.073 "iscsi_enable_histogram", 00:06:47.073 "iscsi_set_options", 00:06:47.073 "iscsi_get_auth_groups", 00:06:47.073 "iscsi_auth_group_remove_secret", 00:06:47.073 "iscsi_auth_group_add_secret", 00:06:47.073 "iscsi_delete_auth_group", 00:06:47.073 "iscsi_create_auth_group", 00:06:47.073 "iscsi_set_discovery_auth", 00:06:47.073 "iscsi_get_options", 00:06:47.073 "iscsi_target_node_request_logout", 00:06:47.073 "iscsi_target_node_set_redirect", 00:06:47.073 "iscsi_target_node_set_auth", 00:06:47.073 "iscsi_target_node_add_lun", 00:06:47.073 "iscsi_get_stats", 00:06:47.073 "iscsi_get_connections", 00:06:47.073 "iscsi_portal_group_set_auth", 00:06:47.073 "iscsi_start_portal_group", 00:06:47.073 "iscsi_delete_portal_group", 00:06:47.073 "iscsi_create_portal_group", 00:06:47.073 "iscsi_get_portal_groups", 00:06:47.073 "iscsi_delete_target_node", 00:06:47.073 "iscsi_target_node_remove_pg_ig_maps", 00:06:47.073 "iscsi_target_node_add_pg_ig_maps", 00:06:47.073 "iscsi_create_target_node", 00:06:47.073 "iscsi_get_target_nodes", 00:06:47.073 "iscsi_delete_initiator_group", 00:06:47.073 "iscsi_initiator_group_remove_initiators", 00:06:47.073 "iscsi_initiator_group_add_initiators", 00:06:47.073 "iscsi_create_initiator_group", 00:06:47.073 "iscsi_get_initiator_groups", 00:06:47.073 "nvmf_set_crdt", 00:06:47.073 "nvmf_set_config", 00:06:47.073 "nvmf_set_max_subsystems", 00:06:47.073 "nvmf_stop_mdns_prr", 00:06:47.073 "nvmf_publish_mdns_prr", 00:06:47.073 "nvmf_subsystem_get_listeners", 00:06:47.073 "nvmf_subsystem_get_qpairs", 00:06:47.073 
"nvmf_subsystem_get_controllers", 00:06:47.073 "nvmf_get_stats", 00:06:47.073 "nvmf_get_transports", 00:06:47.073 "nvmf_create_transport", 00:06:47.073 "nvmf_get_targets", 00:06:47.073 "nvmf_delete_target", 00:06:47.073 "nvmf_create_target", 00:06:47.073 "nvmf_subsystem_allow_any_host", 00:06:47.073 "nvmf_subsystem_set_keys", 00:06:47.073 "nvmf_subsystem_remove_host", 00:06:47.073 "nvmf_subsystem_add_host", 00:06:47.073 "nvmf_ns_remove_host", 00:06:47.073 "nvmf_ns_add_host", 00:06:47.073 "nvmf_subsystem_remove_ns", 00:06:47.073 "nvmf_subsystem_set_ns_ana_group", 00:06:47.073 "nvmf_subsystem_add_ns", 00:06:47.073 "nvmf_subsystem_listener_set_ana_state", 00:06:47.073 "nvmf_discovery_get_referrals", 00:06:47.073 "nvmf_discovery_remove_referral", 00:06:47.073 "nvmf_discovery_add_referral", 00:06:47.073 "nvmf_subsystem_remove_listener", 00:06:47.073 "nvmf_subsystem_add_listener", 00:06:47.073 "nvmf_delete_subsystem", 00:06:47.073 "nvmf_create_subsystem", 00:06:47.073 "nvmf_get_subsystems", 00:06:47.073 "env_dpdk_get_mem_stats", 00:06:47.073 "nbd_get_disks", 00:06:47.073 "nbd_stop_disk", 00:06:47.073 "nbd_start_disk", 00:06:47.073 "ublk_recover_disk", 00:06:47.073 "ublk_get_disks", 00:06:47.073 "ublk_stop_disk", 00:06:47.073 "ublk_start_disk", 00:06:47.073 "ublk_destroy_target", 00:06:47.073 "ublk_create_target", 00:06:47.073 "virtio_blk_create_transport", 00:06:47.073 "virtio_blk_get_transports", 00:06:47.073 "vhost_controller_set_coalescing", 00:06:47.073 "vhost_get_controllers", 00:06:47.073 "vhost_delete_controller", 00:06:47.073 "vhost_create_blk_controller", 00:06:47.073 "vhost_scsi_controller_remove_target", 00:06:47.073 "vhost_scsi_controller_add_target", 00:06:47.073 "vhost_start_scsi_controller", 00:06:47.073 "vhost_create_scsi_controller", 00:06:47.073 "thread_set_cpumask", 00:06:47.073 "scheduler_set_options", 00:06:47.073 "framework_get_governor", 00:06:47.073 "framework_get_scheduler", 00:06:47.073 "framework_set_scheduler", 00:06:47.073 "framework_get_reactors", 00:06:47.073 "thread_get_io_channels", 00:06:47.073 "thread_get_pollers", 00:06:47.073 "thread_get_stats", 00:06:47.073 "framework_monitor_context_switch", 00:06:47.073 "spdk_kill_instance", 00:06:47.073 "log_enable_timestamps", 00:06:47.073 "log_get_flags", 00:06:47.073 "log_clear_flag", 00:06:47.073 "log_set_flag", 00:06:47.073 "log_get_level", 00:06:47.073 "log_set_level", 00:06:47.073 "log_get_print_level", 00:06:47.073 "log_set_print_level", 00:06:47.073 "framework_enable_cpumask_locks", 00:06:47.073 "framework_disable_cpumask_locks", 00:06:47.073 "framework_wait_init", 00:06:47.073 "framework_start_init", 00:06:47.073 "scsi_get_devices", 00:06:47.073 "bdev_get_histogram", 00:06:47.073 "bdev_enable_histogram", 00:06:47.073 "bdev_set_qos_limit", 00:06:47.073 "bdev_set_qd_sampling_period", 00:06:47.073 "bdev_get_bdevs", 00:06:47.073 "bdev_reset_iostat", 00:06:47.073 "bdev_get_iostat", 00:06:47.073 "bdev_examine", 00:06:47.073 "bdev_wait_for_examine", 00:06:47.073 "bdev_set_options", 00:06:47.074 "accel_get_stats", 00:06:47.074 "accel_set_options", 00:06:47.074 "accel_set_driver", 00:06:47.074 "accel_crypto_key_destroy", 00:06:47.074 "accel_crypto_keys_get", 00:06:47.074 "accel_crypto_key_create", 00:06:47.074 "accel_assign_opc", 00:06:47.074 "accel_get_module_info", 00:06:47.074 "accel_get_opc_assignments", 00:06:47.074 "vmd_rescan", 00:06:47.074 "vmd_remove_device", 00:06:47.074 "vmd_enable", 00:06:47.074 "sock_get_default_impl", 00:06:47.074 "sock_set_default_impl", 00:06:47.074 "sock_impl_set_options", 00:06:47.074 
"sock_impl_get_options", 00:06:47.074 "iobuf_get_stats", 00:06:47.074 "iobuf_set_options", 00:06:47.074 "keyring_get_keys", 00:06:47.074 "framework_get_pci_devices", 00:06:47.074 "framework_get_config", 00:06:47.074 "framework_get_subsystems", 00:06:47.074 "fsdev_set_opts", 00:06:47.074 "fsdev_get_opts", 00:06:47.074 "trace_get_info", 00:06:47.074 "trace_get_tpoint_group_mask", 00:06:47.074 "trace_disable_tpoint_group", 00:06:47.074 "trace_enable_tpoint_group", 00:06:47.074 "trace_clear_tpoint_mask", 00:06:47.074 "trace_set_tpoint_mask", 00:06:47.074 "notify_get_notifications", 00:06:47.074 "notify_get_types", 00:06:47.074 "spdk_get_version", 00:06:47.074 "rpc_get_methods" 00:06:47.074 ] 00:06:47.074 14:22:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:47.074 14:22:39 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:47.074 14:22:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.074 14:22:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:47.074 14:22:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71802 00:06:47.074 14:22:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71802 ']' 00:06:47.074 14:22:39 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71802 00:06:47.074 14:22:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:47.074 14:22:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.074 14:22:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71802 00:06:47.074 14:22:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.074 killing process with pid 71802 00:06:47.074 14:22:39 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.074 14:22:39 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71802' 00:06:47.074 14:22:39 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71802 00:06:47.074 14:22:39 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71802 00:06:47.333 00:06:47.333 real 0m1.088s 00:06:47.333 user 0m1.939s 00:06:47.333 sys 0m0.319s 00:06:47.333 14:22:39 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.333 14:22:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.333 ************************************ 00:06:47.333 END TEST spdkcli_tcp 00:06:47.333 ************************************ 00:06:47.333 14:22:39 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:47.333 14:22:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.333 14:22:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.333 14:22:39 -- common/autotest_common.sh@10 -- # set +x 00:06:47.333 ************************************ 00:06:47.333 START TEST dpdk_mem_utility 00:06:47.333 ************************************ 00:06:47.333 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:47.333 * Looking for test storage... 
00:06:47.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:47.333 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:47.333 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:47.333 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:47.592 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:47.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
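That "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message is the harness's waitforlisten helper pausing until the freshly launched spdk_tgt answers RPCs. For orientation, a minimal sketch of the start/wait/teardown pattern these suites repeat, assuming the default RPC socket and the paths used by this job (an illustration only, not the literal autotest_common.sh code):

  # Sketch only: start a target, wait for its RPC socket, always clean up.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK"/build/bin/spdk_tgt &
  spdkpid=$!
  trap 'kill -9 "$spdkpid" 2>/dev/null' SIGINT SIGTERM EXIT
  # Poll with a cheap RPC until the UNIX-domain socket accepts connections.
  until "$SPDK"/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.1
  done
  # ... test body runs here ...
  kill "$spdkpid"; wait "$spdkpid"
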
00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.592 14:22:39 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:47.592 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.592 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:47.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.592 --rc genhtml_branch_coverage=1 00:06:47.592 --rc genhtml_function_coverage=1 00:06:47.592 --rc genhtml_legend=1 00:06:47.592 --rc geninfo_all_blocks=1 00:06:47.592 --rc geninfo_unexecuted_blocks=1 00:06:47.592 00:06:47.592 ' 00:06:47.592 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:47.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.592 --rc genhtml_branch_coverage=1 00:06:47.592 --rc genhtml_function_coverage=1 00:06:47.592 --rc genhtml_legend=1 00:06:47.592 --rc geninfo_all_blocks=1 00:06:47.592 --rc geninfo_unexecuted_blocks=1 00:06:47.592 00:06:47.592 ' 00:06:47.592 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:47.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.592 --rc genhtml_branch_coverage=1 00:06:47.592 --rc genhtml_function_coverage=1 00:06:47.592 --rc genhtml_legend=1 00:06:47.592 --rc geninfo_all_blocks=1 00:06:47.592 --rc geninfo_unexecuted_blocks=1 00:06:47.592 00:06:47.592 ' 00:06:47.592 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:47.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.592 --rc genhtml_branch_coverage=1 00:06:47.592 --rc genhtml_function_coverage=1 00:06:47.592 --rc genhtml_legend=1 00:06:47.592 --rc geninfo_all_blocks=1 00:06:47.592 --rc geninfo_unexecuted_blocks=1 00:06:47.592 00:06:47.592 ' 00:06:47.592 14:22:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:47.592 14:22:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71888 00:06:47.592 14:22:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:47.592 14:22:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71888 00:06:47.592 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 71888 ']' 00:06:47.592 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.592 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.592 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.592 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.592 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:47.592 [2024-12-16 14:22:39.654853] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:47.592 [2024-12-16 14:22:39.655115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71888 ] 00:06:47.851 [2024-12-16 14:22:39.792826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.851 [2024-12-16 14:22:39.812676] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.851 [2024-12-16 14:22:39.848415] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.851 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.851 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:47.851 14:22:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:47.851 14:22:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:47.851 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.851 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:47.851 { 00:06:47.851 "filename": "/tmp/spdk_mem_dump.txt" 00:06:47.851 } 00:06:47.851 14:22:39 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.851 14:22:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:47.851 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:47.851 1 heaps totaling size 818.000000 MiB 00:06:47.851 size: 818.000000 MiB heap id: 0 00:06:47.851 end heaps---------- 00:06:47.851 9 mempools totaling size 603.782043 MiB 00:06:47.851 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:47.851 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:47.851 size: 100.555481 MiB name: bdev_io_71888 00:06:47.851 size: 50.003479 MiB name: msgpool_71888 00:06:47.851 size: 36.509338 MiB name: fsdev_io_71888 00:06:47.851 size: 21.763794 MiB name: PDU_Pool 00:06:47.851 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:47.851 size: 4.133484 MiB name: evtpool_71888 00:06:47.851 size: 0.026123 MiB name: Session_Pool 00:06:47.851 end mempools------- 00:06:47.851 6 memzones totaling size 4.142822 MiB 00:06:47.851 size: 1.000366 MiB name: RG_ring_0_71888 00:06:47.851 size: 1.000366 MiB name: RG_ring_1_71888 00:06:47.851 size: 1.000366 MiB name: RG_ring_4_71888 00:06:47.851 size: 1.000366 MiB name: RG_ring_5_71888 00:06:47.851 size: 0.125366 MiB name: RG_ring_2_71888 00:06:47.851 size: 0.015991 MiB name: RG_ring_3_71888 00:06:47.851 end memzones------- 00:06:47.851 14:22:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:48.111 heap id: 0 total size: 818.000000 MiB number of busy elements: 317 number of free elements: 15 00:06:48.111 list of free elements. 
size: 10.802490 MiB 00:06:48.111 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:48.111 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:48.111 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:48.111 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:48.111 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:48.111 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:48.111 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:48.111 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:48.111 element at address: 0x20001ae00000 with size: 0.567688 MiB 00:06:48.111 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:48.111 element at address: 0x200000c00000 with size: 0.486267 MiB 00:06:48.111 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:48.111 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:48.111 element at address: 0x200028200000 with size: 0.395752 MiB 00:06:48.111 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:48.111 list of standard malloc elements. size: 199.268616 MiB 00:06:48.111 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:48.111 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:48.111 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:48.111 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:48.111 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:48.111 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:48.111 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:48.111 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:48.111 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:48.111 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:48.111 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:48.111 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:48.111 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:48.111 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:48.111 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:48.111 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:48.111 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:48.111 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:48.111 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:48.111 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:48.111 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:48.111 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:48.111 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:48.111 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:48.111 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:48.111 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:48.111 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:48.111 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:48.111 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:48.112 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:48.112 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:48.112 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:48.112 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:48.112 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:48.112 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:48.112 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae92c80 with size: 0.000183 MiB 
00:06:48.112 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:06:48.112 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:06:48.113 element at 
address: 0x20001ae95200 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:48.113 element at address: 0x200028265500 with size: 0.000183 MiB 00:06:48.113 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826c480 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826c540 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826c600 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826c780 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826c840 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826c900 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826d080 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826d140 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826d200 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826d380 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826d440 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826d500 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826d680 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826d740 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826d800 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826d980 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826da40 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826db00 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826de00 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826df80 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826e040 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826e100 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826e280 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826e340 
with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826e400 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826e580 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826e640 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826e700 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826e880 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826e940 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826f000 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826f180 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826f240 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826f300 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826f480 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826f540 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826f600 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826f780 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826f840 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826f900 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:48.113 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:48.113 list of memzone associated elements. 
size: 607.928894 MiB 00:06:48.113 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:48.113 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:48.113 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:48.113 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:48.113 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:48.113 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_71888_0 00:06:48.113 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:48.113 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71888_0 00:06:48.113 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:48.113 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71888_0 00:06:48.113 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:48.113 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:48.113 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:48.113 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:48.113 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:48.113 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71888_0 00:06:48.113 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:48.113 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71888 00:06:48.113 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:48.113 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71888 00:06:48.113 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:48.113 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:48.113 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:48.113 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:48.113 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:48.113 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:48.113 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:48.113 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:48.113 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:48.113 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71888 00:06:48.113 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:48.113 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71888 00:06:48.113 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:48.113 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71888 00:06:48.113 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:48.113 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71888 00:06:48.113 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:48.113 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71888 00:06:48.113 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:48.113 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71888 00:06:48.113 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:48.113 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:48.113 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:48.113 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:48.113 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:48.113 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:48.113 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:48.113 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71888 00:06:48.113 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:48.113 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71888 00:06:48.113 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:48.113 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:48.113 element at address: 0x200028265680 with size: 0.023743 MiB 00:06:48.113 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:48.113 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:48.114 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71888 00:06:48.114 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:06:48.114 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:48.114 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:48.114 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71888 00:06:48.114 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:48.114 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71888 00:06:48.114 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:48.114 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71888 00:06:48.114 element at address: 0x20002826c280 with size: 0.000305 MiB 00:06:48.114 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:48.114 14:22:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:48.114 14:22:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71888 00:06:48.114 14:22:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 71888 ']' 00:06:48.114 14:22:40 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 71888 00:06:48.114 14:22:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:48.114 14:22:40 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.114 14:22:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71888 00:06:48.114 killing process with pid 71888 00:06:48.114 14:22:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.114 14:22:40 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.114 14:22:40 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71888' 00:06:48.114 14:22:40 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 71888 00:06:48.114 14:22:40 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 71888 00:06:48.373 00:06:48.373 real 0m0.913s 00:06:48.373 user 0m1.014s 00:06:48.373 sys 0m0.289s 00:06:48.373 14:22:40 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.373 ************************************ 00:06:48.373 END TEST dpdk_mem_utility 00:06:48.373 ************************************ 00:06:48.373 14:22:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:48.373 14:22:40 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:48.373 14:22:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.373 14:22:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.373 14:22:40 -- common/autotest_common.sh@10 -- # set +x 
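The dump above is what the dpdk_mem_utility suite inspects: env_dpdk_get_mem_stats asks the target to write its DPDK memory state to a file (the RPC reply above names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py renders it, first as the heap/mempool/memzone summary and then, with -m 0, as the per-element map of heap 0. A rough sketch of reproducing the same flow by hand against a running target, with the paths this job uses:

  # Sketch only: reproduce the mem-info flow traced above.
  SPDK=/home/vagrant/spdk_repo/spdk
  # Have the target dump its DPDK memory state (the reply names the dump file).
  "$SPDK"/scripts/rpc.py env_dpdk_get_mem_stats
  # Summary view: heaps, mempools and memzones.
  "$SPDK"/scripts/dpdk_mem_info.py
  # Detailed per-element view of heap 0, as printed above.
  "$SPDK"/scripts/dpdk_mem_info.py -m 0
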
00:06:48.373 ************************************ 00:06:48.373 START TEST event 00:06:48.373 ************************************ 00:06:48.373 14:22:40 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:48.373 * Looking for test storage... 00:06:48.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:48.373 14:22:40 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:48.373 14:22:40 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:48.373 14:22:40 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:48.632 14:22:40 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:48.632 14:22:40 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.632 14:22:40 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.632 14:22:40 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.632 14:22:40 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.632 14:22:40 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.632 14:22:40 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.632 14:22:40 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.632 14:22:40 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.632 14:22:40 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.632 14:22:40 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.632 14:22:40 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.632 14:22:40 event -- scripts/common.sh@344 -- # case "$op" in 00:06:48.632 14:22:40 event -- scripts/common.sh@345 -- # : 1 00:06:48.632 14:22:40 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.632 14:22:40 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.632 14:22:40 event -- scripts/common.sh@365 -- # decimal 1 00:06:48.632 14:22:40 event -- scripts/common.sh@353 -- # local d=1 00:06:48.632 14:22:40 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.632 14:22:40 event -- scripts/common.sh@355 -- # echo 1 00:06:48.632 14:22:40 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.632 14:22:40 event -- scripts/common.sh@366 -- # decimal 2 00:06:48.632 14:22:40 event -- scripts/common.sh@353 -- # local d=2 00:06:48.632 14:22:40 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.632 14:22:40 event -- scripts/common.sh@355 -- # echo 2 00:06:48.632 14:22:40 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.632 14:22:40 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.632 14:22:40 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.632 14:22:40 event -- scripts/common.sh@368 -- # return 0 00:06:48.632 14:22:40 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.632 14:22:40 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:48.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.632 --rc genhtml_branch_coverage=1 00:06:48.632 --rc genhtml_function_coverage=1 00:06:48.632 --rc genhtml_legend=1 00:06:48.632 --rc geninfo_all_blocks=1 00:06:48.632 --rc geninfo_unexecuted_blocks=1 00:06:48.632 00:06:48.632 ' 00:06:48.632 14:22:40 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:48.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.632 --rc genhtml_branch_coverage=1 00:06:48.632 --rc genhtml_function_coverage=1 00:06:48.632 --rc genhtml_legend=1 00:06:48.632 --rc 
geninfo_all_blocks=1 00:06:48.632 --rc geninfo_unexecuted_blocks=1 00:06:48.632 00:06:48.632 ' 00:06:48.632 14:22:40 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:48.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.632 --rc genhtml_branch_coverage=1 00:06:48.632 --rc genhtml_function_coverage=1 00:06:48.632 --rc genhtml_legend=1 00:06:48.632 --rc geninfo_all_blocks=1 00:06:48.632 --rc geninfo_unexecuted_blocks=1 00:06:48.632 00:06:48.632 ' 00:06:48.632 14:22:40 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:48.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.632 --rc genhtml_branch_coverage=1 00:06:48.632 --rc genhtml_function_coverage=1 00:06:48.632 --rc genhtml_legend=1 00:06:48.632 --rc geninfo_all_blocks=1 00:06:48.632 --rc geninfo_unexecuted_blocks=1 00:06:48.632 00:06:48.632 ' 00:06:48.632 14:22:40 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:48.632 14:22:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:48.632 14:22:40 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:48.632 14:22:40 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:48.632 14:22:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.632 14:22:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.632 ************************************ 00:06:48.632 START TEST event_perf 00:06:48.632 ************************************ 00:06:48.632 14:22:40 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:48.632 Running I/O for 1 seconds...[2024-12-16 14:22:40.619582] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:48.632 [2024-12-16 14:22:40.619666] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71960 ] 00:06:48.632 [2024-12-16 14:22:40.754715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.632 [2024-12-16 14:22:40.774967] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.632 [2024-12-16 14:22:40.775225] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.632 [2024-12-16 14:22:40.775106] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.632 Running I/O for 1 seconds...[2024-12-16 14:22:40.775222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.013 00:06:50.013 lcore 0: 203874 00:06:50.013 lcore 1: 203874 00:06:50.013 lcore 2: 203874 00:06:50.013 lcore 3: 203874 00:06:50.013 done. 
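The four lcore counters above are event_perf's per-core event counts for its one-second run. The event binaries exercised in this suite can also be run standalone; a sketch using the same arguments as this job (-m is the reactor core mask, -t the run time in seconds):

  # Sketch only: run the event microbenchmarks by hand.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK"/test/event/event_perf/event_perf -m 0xF -t 1   # four reactors, one second
  "$SPDK"/test/event/reactor/reactor -t 1                # single-reactor tick test (below)
  "$SPDK"/test/event/reactor_perf/reactor_perf -t 1      # events-per-second figure (below)
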
00:06:50.013 00:06:50.013 real 0m1.202s 00:06:50.013 user 0m4.045s 00:06:50.013 sys 0m0.039s 00:06:50.013 14:22:41 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.013 14:22:41 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:50.013 ************************************ 00:06:50.013 END TEST event_perf 00:06:50.013 ************************************ 00:06:50.013 14:22:41 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:50.013 14:22:41 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:50.013 14:22:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.013 14:22:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.013 ************************************ 00:06:50.013 START TEST event_reactor 00:06:50.013 ************************************ 00:06:50.013 14:22:41 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:50.013 [2024-12-16 14:22:41.875305] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:50.013 [2024-12-16 14:22:41.875475] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71993 ] 00:06:50.013 [2024-12-16 14:22:42.014321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.013 [2024-12-16 14:22:42.033694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.950 test_start 00:06:50.950 oneshot 00:06:50.950 tick 100 00:06:50.950 tick 100 00:06:50.950 tick 250 00:06:50.950 tick 100 00:06:50.950 tick 100 00:06:50.950 tick 100 00:06:50.950 tick 250 00:06:50.950 tick 500 00:06:50.950 tick 100 00:06:50.950 tick 100 00:06:50.950 tick 250 00:06:50.950 tick 100 00:06:50.950 tick 100 00:06:50.950 test_end 00:06:50.950 00:06:50.950 real 0m1.204s 00:06:50.950 user 0m1.069s 00:06:50.950 sys 0m0.031s 00:06:50.950 14:22:43 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.950 ************************************ 00:06:50.950 END TEST event_reactor 00:06:50.950 ************************************ 00:06:50.950 14:22:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:50.950 14:22:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:50.950 14:22:43 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:50.950 14:22:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.950 14:22:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.950 ************************************ 00:06:50.950 START TEST event_reactor_perf 00:06:50.950 ************************************ 00:06:50.950 14:22:43 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:50.950 [2024-12-16 14:22:43.131263] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:50.950 [2024-12-16 14:22:43.131354] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72028 ] 00:06:51.209 [2024-12-16 14:22:43.274739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.209 [2024-12-16 14:22:43.292030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.146 test_start 00:06:52.146 test_end 00:06:52.146 Performance: 444698 events per second 00:06:52.146 ************************************ 00:06:52.146 END TEST event_reactor_perf 00:06:52.146 ************************************ 00:06:52.146 00:06:52.146 real 0m1.208s 00:06:52.146 user 0m1.070s 00:06:52.146 sys 0m0.033s 00:06:52.146 14:22:44 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.146 14:22:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:52.405 14:22:44 event -- event/event.sh@49 -- # uname -s 00:06:52.405 14:22:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:52.405 14:22:44 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:52.405 14:22:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.405 14:22:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.405 14:22:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.405 ************************************ 00:06:52.405 START TEST event_scheduler 00:06:52.405 ************************************ 00:06:52.405 14:22:44 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:52.405 * Looking for test storage... 
00:06:52.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:52.405 14:22:44 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:52.405 14:22:44 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:52.405 14:22:44 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:52.405 14:22:44 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:52.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.405 14:22:44 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:52.405 14:22:44 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.405 14:22:44 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:52.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.405 --rc genhtml_branch_coverage=1 00:06:52.405 --rc genhtml_function_coverage=1 00:06:52.405 --rc genhtml_legend=1 00:06:52.405 --rc geninfo_all_blocks=1 00:06:52.405 --rc geninfo_unexecuted_blocks=1 00:06:52.405 00:06:52.405 ' 00:06:52.405 14:22:44 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:52.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.405 --rc genhtml_branch_coverage=1 00:06:52.405 --rc genhtml_function_coverage=1 00:06:52.405 --rc genhtml_legend=1 00:06:52.405 --rc geninfo_all_blocks=1 00:06:52.405 --rc geninfo_unexecuted_blocks=1 00:06:52.405 00:06:52.405 ' 00:06:52.405 14:22:44 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:52.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.405 --rc genhtml_branch_coverage=1 00:06:52.405 --rc genhtml_function_coverage=1 00:06:52.405 --rc genhtml_legend=1 00:06:52.405 --rc geninfo_all_blocks=1 00:06:52.405 --rc geninfo_unexecuted_blocks=1 00:06:52.405 00:06:52.405 ' 00:06:52.405 14:22:44 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:52.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.405 --rc genhtml_branch_coverage=1 00:06:52.405 --rc genhtml_function_coverage=1 00:06:52.405 --rc genhtml_legend=1 00:06:52.405 --rc geninfo_all_blocks=1 00:06:52.405 --rc geninfo_unexecuted_blocks=1 00:06:52.405 00:06:52.405 ' 00:06:52.405 14:22:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:52.405 14:22:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=72098 00:06:52.406 14:22:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.406 14:22:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 72098 00:06:52.406 14:22:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:52.406 14:22:44 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 72098 ']' 00:06:52.406 14:22:44 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.406 14:22:44 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.406 14:22:44 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.406 14:22:44 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.406 14:22:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:52.664 [2024-12-16 14:22:44.621532] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:52.664 [2024-12-16 14:22:44.621829] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72098 ] 00:06:52.664 [2024-12-16 14:22:44.770900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.664 [2024-12-16 14:22:44.798331] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.664 [2024-12-16 14:22:44.798494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.664 [2024-12-16 14:22:44.798559] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.664 [2024-12-16 14:22:44.798558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.924 14:22:44 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.924 14:22:44 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:52.924 14:22:44 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:52.924 14:22:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.924 14:22:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:52.924 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:52.924 POWER: Cannot set governor of lcore 0 to userspace 00:06:52.924 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:52.924 POWER: Cannot set governor of lcore 0 to performance 00:06:52.924 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:52.924 POWER: Cannot set governor of lcore 0 to userspace 00:06:52.924 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:52.924 POWER: Unable to set Power Management Environment for lcore 0 00:06:52.924 [2024-12-16 14:22:44.884578] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:52.924 [2024-12-16 14:22:44.884594] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:52.924 [2024-12-16 14:22:44.884656] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:52.924 [2024-12-16 14:22:44.884675] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:52.924 [2024-12-16 14:22:44.884685] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:52.924 [2024-12-16 14:22:44.884694] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:52.924 14:22:44 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.924 14:22:44 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:52.924 14:22:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.924 14:22:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:52.924 [2024-12-16 14:22:44.923195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.924 [2024-12-16 14:22:44.941666] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
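[annotation] The POWER/cpufreq errors above are the expected fallback on a VM with no scaling_governor access: the dpdk governor fails to initialize and the dynamic scheduler runs with its default load/core/busy limits (20/80/95). The two RPCs the test then issues through rpc_cmd, written out here as direct rpc.py calls for clarity (socket path as reported by waitforlisten above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock

  # Pick the dynamic scheduler while the app is still paused in --wait-for-rpc...
  "$rpc" -s "$sock" framework_set_scheduler dynamic
  # ...then let initialization finish so the reactors start running under it.
  "$rpc" -s "$sock" framework_start_init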
00:06:52.924 14:22:44 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.924 14:22:44 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:52.924 14:22:44 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.924 14:22:44 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.924 14:22:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:52.924 ************************************ 00:06:52.924 START TEST scheduler_create_thread 00:06:52.924 ************************************ 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.924 2 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.924 3 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.924 4 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.924 5 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.924 14:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.924 6 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.924 7 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.924 8 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.924 9 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.924 10 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.924 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.925 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.925 14:22:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:52.925 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.925 14:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.826 14:22:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.826 14:22:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:54.826 14:22:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:54.826 14:22:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.826 14:22:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.393 ************************************ 00:06:55.393 END TEST scheduler_create_thread 00:06:55.393 ************************************ 00:06:55.393 14:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.393 00:06:55.393 real 0m2.612s 00:06:55.393 user 0m0.019s 00:06:55.393 sys 0m0.006s 00:06:55.393 14:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.393 14:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.652 14:22:47 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:55.652 14:22:47 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 72098 00:06:55.652 14:22:47 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 72098 ']' 00:06:55.652 14:22:47 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 72098 00:06:55.652 14:22:47 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:55.652 14:22:47 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.652 14:22:47 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72098 00:06:55.652 killing process with pid 72098 00:06:55.652 14:22:47 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:55.652 14:22:47 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:55.652 14:22:47 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72098' 00:06:55.652 14:22:47 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 72098 00:06:55.652 14:22:47 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 72098 00:06:55.911 [2024-12-16 14:22:48.044967] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
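[annotation] scheduler_create_thread, traced above, exercises a test-only RPC plugin: four 100%-busy threads pinned one per core, four idle pinned threads, an unpinned thread at 30% activity, a "half_active" thread later raised to 50%, and a throwaway thread that is created and immediately deleted. A hedged condensation of those calls (rpc_cmd shown as plain rpc.py invocations; the plugin module is assumed importable, as the test harness arranges; thread ids 11 and 12 are simply what this run returned):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"

  for mask in 0x1 0x2 0x4 0x8; do
      $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100   # pinned, 100% busy
      $rpc scheduler_thread_create -n idle_pinned   -m "$mask" -a 0     # pinned, idle
  done

  $rpc scheduler_thread_create -n one_third_active -a 30                # unpinned, ~30% busy
  tid=$($rpc scheduler_thread_create -n half_active -a 0)               # "11" in this run
  $rpc scheduler_thread_set_active "$tid" 50                            # bump it to 50% busy

  tid=$($rpc scheduler_thread_create -n deleted -a 100)                 # "12" in this run
  $rpc scheduler_thread_delete "$tid"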
00:06:56.170 00:06:56.170 real 0m3.792s 00:06:56.170 user 0m5.721s 00:06:56.170 sys 0m0.299s 00:06:56.170 14:22:48 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.170 14:22:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.170 ************************************ 00:06:56.170 END TEST event_scheduler 00:06:56.170 ************************************ 00:06:56.170 14:22:48 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:56.170 14:22:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:56.170 14:22:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.170 14:22:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.170 14:22:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.170 ************************************ 00:06:56.170 START TEST app_repeat 00:06:56.170 ************************************ 00:06:56.170 14:22:48 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:56.170 14:22:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.170 14:22:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.170 14:22:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:56.170 14:22:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.170 14:22:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:56.170 14:22:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:56.170 14:22:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:56.170 Process app_repeat pid: 72179 00:06:56.170 spdk_app_start Round 0 00:06:56.170 14:22:48 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72179 00:06:56.170 14:22:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:56.170 14:22:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72179' 00:06:56.170 14:22:48 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:56.170 14:22:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:56.170 14:22:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:56.170 14:22:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72179 /var/tmp/spdk-nbd.sock 00:06:56.170 14:22:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72179 ']' 00:06:56.170 14:22:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.170 14:22:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.170 14:22:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:56.170 14:22:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.170 14:22:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.170 [2024-12-16 14:22:48.265892] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
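[annotation] app_repeat_test, started above, launches the app_repeat helper (-r names the RPC socket, -m 0x3 gives it the two available cores, -t carries the repeat_times=4 set just above) and then drives three "spdk_app_start Round N" iterations against it, waiting for the socket again at the top of each round. A hedged outline, with the per-round body deferred to the NBD sketches further down:

  app_repeat_bin=/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat
  rpc_server=/var/tmp/spdk-nbd.sock

  modprobe nbd                                    # the rounds below need /dev/nbd*
  "$app_repeat_bin" -r "$rpc_server" -m 0x3 -t 4 &
  repeat_pid=$!
  trap 'kill "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      # wait for $rpc_server, create Malloc0/Malloc1, attach them to
      # /dev/nbd0 and /dev/nbd1, write and verify data, then detach and
      # restart the app for the next round (see the sketches below).
  done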
00:06:56.170 [2024-12-16 14:22:48.265977] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72179 ] 00:06:56.429 [2024-12-16 14:22:48.411674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.429 [2024-12-16 14:22:48.432719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.429 [2024-12-16 14:22:48.432727] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.429 [2024-12-16 14:22:48.461158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.429 14:22:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.429 14:22:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:56.429 14:22:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.688 Malloc0 00:06:56.688 14:22:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.947 Malloc1 00:06:56.947 14:22:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.947 14:22:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.947 14:22:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.947 14:22:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:56.947 14:22:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.947 14:22:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:56.947 14:22:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.947 14:22:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.947 14:22:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.947 14:22:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:56.947 14:22:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.947 14:22:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:56.947 14:22:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:56.947 14:22:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:56.947 14:22:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.947 14:22:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:57.206 /dev/nbd0 00:06:57.206 14:22:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:57.206 14:22:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:57.206 14:22:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:57.206 14:22:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:57.206 14:22:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.206 14:22:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.206 14:22:49 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:57.206 14:22:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:57.206 14:22:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.206 14:22:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.206 14:22:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:57.206 1+0 records in 00:06:57.206 1+0 records out 00:06:57.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116698 s, 3.5 MB/s 00:06:57.206 14:22:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.206 14:22:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:57.206 14:22:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.206 14:22:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.206 14:22:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:57.206 14:22:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.206 14:22:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.206 14:22:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:57.465 /dev/nbd1 00:06:57.465 14:22:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:57.465 14:22:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:57.465 14:22:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:57.465 14:22:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:57.465 14:22:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.465 14:22:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.465 14:22:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:57.465 14:22:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:57.465 14:22:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.465 14:22:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.465 14:22:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:57.465 1+0 records in 00:06:57.465 1+0 records out 00:06:57.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311506 s, 13.1 MB/s 00:06:57.465 14:22:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.465 14:22:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:57.465 14:22:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.465 14:22:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.465 14:22:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:57.465 14:22:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.465 14:22:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.465 14:22:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:06:57.465 14:22:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.465 14:22:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.049 14:22:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:58.049 { 00:06:58.049 "nbd_device": "/dev/nbd0", 00:06:58.049 "bdev_name": "Malloc0" 00:06:58.049 }, 00:06:58.049 { 00:06:58.049 "nbd_device": "/dev/nbd1", 00:06:58.049 "bdev_name": "Malloc1" 00:06:58.049 } 00:06:58.049 ]' 00:06:58.049 14:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:58.049 { 00:06:58.049 "nbd_device": "/dev/nbd0", 00:06:58.049 "bdev_name": "Malloc0" 00:06:58.049 }, 00:06:58.049 { 00:06:58.049 "nbd_device": "/dev/nbd1", 00:06:58.049 "bdev_name": "Malloc1" 00:06:58.049 } 00:06:58.049 ]' 00:06:58.049 14:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.049 14:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:58.049 /dev/nbd1' 00:06:58.049 14:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.049 14:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:58.049 /dev/nbd1' 00:06:58.049 14:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:58.049 14:22:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:58.049 14:22:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:58.049 14:22:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:58.049 14:22:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:58.049 14:22:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.049 14:22:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.049 14:22:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:58.049 14:22:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:58.049 14:22:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:58.049 14:22:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:58.049 256+0 records in 00:06:58.049 256+0 records out 00:06:58.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00541491 s, 194 MB/s 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:58.049 256+0 records in 00:06:58.049 256+0 records out 00:06:58.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206601 s, 50.8 MB/s 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:58.049 256+0 records in 00:06:58.049 256+0 records out 00:06:58.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251885 s, 41.6 MB/s 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.049 14:22:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:58.356 14:22:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:58.356 14:22:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:58.356 14:22:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:58.356 14:22:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.356 14:22:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.356 14:22:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:58.356 14:22:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:58.356 14:22:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.356 14:22:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.356 14:22:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:58.614 14:22:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:58.614 14:22:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:58.614 14:22:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:58.614 14:22:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.614 14:22:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.614 14:22:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:58.614 14:22:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:58.614 14:22:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.614 14:22:50 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.614 14:22:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.614 14:22:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.873 14:22:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.873 14:22:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.873 14:22:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.873 14:22:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.873 14:22:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.873 14:22:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.873 14:22:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:58.873 14:22:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.873 14:22:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.873 14:22:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:58.873 14:22:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:58.873 14:22:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:58.873 14:22:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:59.440 14:22:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:59.440 [2024-12-16 14:22:51.448952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.440 [2024-12-16 14:22:51.467764] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.440 [2024-12-16 14:22:51.467769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.440 [2024-12-16 14:22:51.494392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.440 [2024-12-16 14:22:51.494505] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:59.440 [2024-12-16 14:22:51.494519] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:02.196 spdk_app_start Round 1 00:07:02.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:02.196 14:22:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:02.196 14:22:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:02.196 14:22:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72179 /var/tmp/spdk-nbd.sock 00:07:02.196 14:22:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72179 ']' 00:07:02.196 14:22:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:02.196 14:22:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.196 14:22:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
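[annotation] Each round repeats the same setup once the app is listening on /var/tmp/spdk-nbd.sock again: two malloc bdevs are created over RPC and exported through the kernel nbd driver, and waitfornbd polls until each device is actually usable. A hedged per-round condensation (bdev_malloc_create's two arguments are the size in MB and the block size; the polling loop simplifies the waitfornbd helper seen in the trace):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  testfile=/home/vagrant/spdk_repo/spdk/test/event/nbdtest

  $rpc bdev_malloc_create 64 4096              # -> "Malloc0"
  $rpc bdev_malloc_create 64 4096              # -> "Malloc1"
  $rpc nbd_start_disk Malloc0 /dev/nbd0        # expose each bdev as an nbd device
  $rpc nbd_start_disk Malloc1 /dev/nbd1

  for nbd in nbd0 nbd1; do
      # Simplified waitfornbd: the device counts as ready once it shows up in
      # /proc/partitions and a single direct 4k read from it succeeds.
      until grep -q -w "$nbd" /proc/partitions; do sleep 0.1; done
      dd if=/dev/$nbd of="$testfile" bs=4096 count=1 iflag=direct
      rm -f "$testfile"
  done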
00:07:02.196 14:22:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.196 14:22:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:02.764 14:22:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.764 14:22:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:02.764 14:22:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.764 Malloc0 00:07:02.764 14:22:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.025 Malloc1 00:07:03.025 14:22:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.025 14:22:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.025 14:22:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.025 14:22:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:03.025 14:22:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.025 14:22:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:03.025 14:22:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.025 14:22:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.025 14:22:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.025 14:22:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:03.025 14:22:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.025 14:22:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:03.025 14:22:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:03.025 14:22:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:03.025 14:22:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.025 14:22:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:03.285 /dev/nbd0 00:07:03.285 14:22:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:03.285 14:22:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:03.285 14:22:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:03.285 14:22:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:03.285 14:22:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.285 14:22:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.285 14:22:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:03.285 14:22:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:03.285 14:22:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.285 14:22:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.285 14:22:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:03.544 1+0 records in 00:07:03.544 1+0 records out 
00:07:03.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318827 s, 12.8 MB/s 00:07:03.544 14:22:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.544 14:22:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:03.544 14:22:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.544 14:22:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.544 14:22:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:03.544 14:22:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.544 14:22:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.544 14:22:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:03.804 /dev/nbd1 00:07:03.804 14:22:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:03.804 14:22:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:03.804 14:22:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:03.804 14:22:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:03.804 14:22:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.804 14:22:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.804 14:22:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:03.804 14:22:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:03.804 14:22:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.804 14:22:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.804 14:22:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:03.804 1+0 records in 00:07:03.804 1+0 records out 00:07:03.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291784 s, 14.0 MB/s 00:07:03.804 14:22:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.804 14:22:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:03.804 14:22:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.804 14:22:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.804 14:22:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:03.804 14:22:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.804 14:22:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.804 14:22:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.804 14:22:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.804 14:22:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:04.063 { 00:07:04.063 "nbd_device": "/dev/nbd0", 00:07:04.063 "bdev_name": "Malloc0" 00:07:04.063 }, 00:07:04.063 { 00:07:04.063 "nbd_device": "/dev/nbd1", 00:07:04.063 "bdev_name": "Malloc1" 00:07:04.063 } 
00:07:04.063 ]' 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:04.063 { 00:07:04.063 "nbd_device": "/dev/nbd0", 00:07:04.063 "bdev_name": "Malloc0" 00:07:04.063 }, 00:07:04.063 { 00:07:04.063 "nbd_device": "/dev/nbd1", 00:07:04.063 "bdev_name": "Malloc1" 00:07:04.063 } 00:07:04.063 ]' 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:04.063 /dev/nbd1' 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:04.063 /dev/nbd1' 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:04.063 256+0 records in 00:07:04.063 256+0 records out 00:07:04.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00899431 s, 117 MB/s 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:04.063 256+0 records in 00:07:04.063 256+0 records out 00:07:04.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214771 s, 48.8 MB/s 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:04.063 256+0 records in 00:07:04.063 256+0 records out 00:07:04.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237432 s, 44.2 MB/s 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.063 14:22:56 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.063 14:22:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.632 14:22:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.632 14:22:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.632 14:22:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.632 14:22:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.632 14:22:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.632 14:22:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.632 14:22:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.632 14:22:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.632 14:22:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.632 14:22:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:04.891 14:22:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:04.891 14:22:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:04.891 14:22:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:04.891 14:22:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.891 14:22:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.891 14:22:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:04.891 14:22:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.891 14:22:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.891 14:22:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.892 14:22:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.892 14:22:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.151 14:22:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.151 14:22:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.151 14:22:57 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:05.151 14:22:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.151 14:22:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.151 14:22:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.151 14:22:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:05.151 14:22:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.151 14:22:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.151 14:22:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:05.151 14:22:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:05.151 14:22:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:05.151 14:22:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:05.410 14:22:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:05.410 [2024-12-16 14:22:57.587045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.410 [2024-12-16 14:22:57.605991] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.410 [2024-12-16 14:22:57.606002] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.669 [2024-12-16 14:22:57.637380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.669 [2024-12-16 14:22:57.637496] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:05.669 [2024-12-16 14:22:57.637509] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:08.956 spdk_app_start Round 2 00:07:08.956 14:23:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:08.956 14:23:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:08.956 14:23:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72179 /var/tmp/spdk-nbd.sock 00:07:08.956 14:23:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72179 ']' 00:07:08.956 14:23:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:08.956 14:23:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.956 14:23:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:08.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
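[annotation] The verification pass that closes every round (nbd_dd_data_verify in the trace) writes 1 MiB of random data through each nbd device with direct I/O and compares it back with cmp before the devices are detached and the app is killed for the next round. A hedged condensation of that pass plus the teardown, folded into one loop where the trace does separate write and verify passes:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  randfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

  dd if=/dev/urandom of="$randfile" bs=4096 count=256              # 1 MiB of random data

  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$randfile" of="$nbd" bs=4096 count=256 oflag=direct   # write it out...
      cmp -b -n 1M "$randfile" "$nbd"                              # ...and verify byte-for-byte
  done
  rm "$randfile"

  $rpc nbd_stop_disk /dev/nbd0                 # detach both devices,
  $rpc nbd_stop_disk /dev/nbd1
  $rpc nbd_get_disks                           # confirm the list is empty ("[]"),
  $rpc spdk_kill_instance SIGTERM              # and stop the app so the next round
  sleep 3                                      # can start it again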
00:07:08.956 14:23:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.956 14:23:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:08.956 14:23:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.956 14:23:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:08.956 14:23:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:08.956 Malloc0 00:07:08.956 14:23:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:09.215 Malloc1 00:07:09.215 14:23:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:09.215 14:23:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.215 14:23:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:09.215 14:23:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:09.215 14:23:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.215 14:23:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:09.215 14:23:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:09.215 14:23:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.215 14:23:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:09.215 14:23:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:09.215 14:23:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:09.215 14:23:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:09.215 14:23:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:09.215 14:23:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:09.215 14:23:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.215 14:23:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:09.473 /dev/nbd0 00:07:09.473 14:23:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:09.473 14:23:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:09.473 14:23:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:09.473 14:23:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:09.473 14:23:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.473 14:23:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.473 14:23:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:09.473 14:23:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:09.473 14:23:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.473 14:23:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.473 14:23:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:09.473 1+0 records in 00:07:09.473 1+0 records out 
00:07:09.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395499 s, 10.4 MB/s 00:07:09.473 14:23:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.473 14:23:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:09.473 14:23:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.473 14:23:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.473 14:23:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:09.473 14:23:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.473 14:23:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.473 14:23:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:09.732 /dev/nbd1 00:07:09.732 14:23:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:09.732 14:23:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:09.732 14:23:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:09.732 14:23:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:09.732 14:23:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.733 14:23:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.733 14:23:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:09.733 14:23:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:09.733 14:23:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.733 14:23:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.733 14:23:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:09.733 1+0 records in 00:07:09.733 1+0 records out 00:07:09.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253607 s, 16.2 MB/s 00:07:09.733 14:23:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.733 14:23:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:09.733 14:23:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:09.992 14:23:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.992 14:23:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:09.992 14:23:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.992 14:23:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:09.992 14:23:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.992 14:23:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.992 14:23:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:10.251 { 00:07:10.251 "nbd_device": "/dev/nbd0", 00:07:10.251 "bdev_name": "Malloc0" 00:07:10.251 }, 00:07:10.251 { 00:07:10.251 "nbd_device": "/dev/nbd1", 00:07:10.251 "bdev_name": "Malloc1" 00:07:10.251 } 
00:07:10.251 ]' 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:10.251 { 00:07:10.251 "nbd_device": "/dev/nbd0", 00:07:10.251 "bdev_name": "Malloc0" 00:07:10.251 }, 00:07:10.251 { 00:07:10.251 "nbd_device": "/dev/nbd1", 00:07:10.251 "bdev_name": "Malloc1" 00:07:10.251 } 00:07:10.251 ]' 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:10.251 /dev/nbd1' 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:10.251 /dev/nbd1' 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:10.251 256+0 records in 00:07:10.251 256+0 records out 00:07:10.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115079 s, 91.1 MB/s 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:10.251 256+0 records in 00:07:10.251 256+0 records out 00:07:10.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022653 s, 46.3 MB/s 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:10.251 256+0 records in 00:07:10.251 256+0 records out 00:07:10.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247856 s, 42.3 MB/s 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.251 14:23:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.511 14:23:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.511 14:23:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.511 14:23:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.511 14:23:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.511 14:23:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.511 14:23:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.772 14:23:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:10.772 14:23:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.772 14:23:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.772 14:23:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:10.772 14:23:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:10.772 14:23:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:10.772 14:23:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:10.772 14:23:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.772 14:23:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.772 14:23:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:11.033 14:23:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:11.033 14:23:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.033 14:23:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.033 14:23:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.033 14:23:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.292 14:23:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.292 14:23:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.292 14:23:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:11.292 14:23:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.292 14:23:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.292 14:23:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.292 14:23:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:11.292 14:23:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.292 14:23:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.292 14:23:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:11.292 14:23:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:11.292 14:23:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:11.292 14:23:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:11.551 14:23:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:11.810 [2024-12-16 14:23:03.764374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.810 [2024-12-16 14:23:03.784849] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.810 [2024-12-16 14:23:03.784861] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.810 [2024-12-16 14:23:03.812448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.810 [2024-12-16 14:23:03.812587] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:11.810 [2024-12-16 14:23:03.812600] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:15.097 14:23:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72179 /var/tmp/spdk-nbd.sock 00:07:15.097 14:23:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72179 ']' 00:07:15.097 14:23:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:15.097 14:23:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:15.097 14:23:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
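Everything from the bdev_malloc_create calls down to the empty nbd_get_disks listing above is one app_repeat round. A condensed sketch of that round, using the RPC socket, device paths and sizes from the trace (the retry loops and error handling in nbd_common.sh are omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  $rpc -s $sock bdev_malloc_create 64 4096                  # -> Malloc0
  $rpc -s $sock bdev_malloc_create 64 4096                  # -> Malloc1
  $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
  $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=$tmp bs=4096 count=256              # 1 MiB of reference data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct       # write it through the nbd export
    cmp -b -n 1M $tmp $nbd                                  # read back and compare
  done
  rm $tmp
  $rpc -s $sock nbd_stop_disk /dev/nbd0
  $rpc -s $sock nbd_stop_disk /dev/nbd1
  $rpc -s $sock nbd_get_disks | jq -r '.[] | .nbd_device'   # expected to be empty now
  $rpc -s $sock spdk_kill_instance SIGTERM                  # end the round; app_repeat restarts the app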
00:07:15.097 14:23:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.097 14:23:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:15.097 14:23:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.097 14:23:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:15.097 14:23:06 event.app_repeat -- event/event.sh@39 -- # killprocess 72179 00:07:15.097 14:23:06 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 72179 ']' 00:07:15.097 14:23:06 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 72179 00:07:15.097 14:23:06 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:15.097 14:23:06 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.097 14:23:06 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72179 00:07:15.097 14:23:07 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.097 14:23:07 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.097 killing process with pid 72179 00:07:15.097 14:23:07 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72179' 00:07:15.097 14:23:07 event.app_repeat -- common/autotest_common.sh@973 -- # kill 72179 00:07:15.097 14:23:07 event.app_repeat -- common/autotest_common.sh@978 -- # wait 72179 00:07:15.097 spdk_app_start is called in Round 0. 00:07:15.097 Shutdown signal received, stop current app iteration 00:07:15.097 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:07:15.097 spdk_app_start is called in Round 1. 00:07:15.097 Shutdown signal received, stop current app iteration 00:07:15.097 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:07:15.097 spdk_app_start is called in Round 2. 00:07:15.097 Shutdown signal received, stop current app iteration 00:07:15.097 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:07:15.097 spdk_app_start is called in Round 3. 00:07:15.097 Shutdown signal received, stop current app iteration 00:07:15.097 14:23:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:15.097 14:23:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:15.097 00:07:15.097 real 0m18.872s 00:07:15.097 user 0m43.570s 00:07:15.097 sys 0m2.698s 00:07:15.097 14:23:07 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.097 14:23:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:15.097 ************************************ 00:07:15.097 END TEST app_repeat 00:07:15.097 ************************************ 00:07:15.097 14:23:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:15.097 14:23:07 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:15.097 14:23:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.097 14:23:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.097 14:23:07 event -- common/autotest_common.sh@10 -- # set +x 00:07:15.097 ************************************ 00:07:15.097 START TEST cpu_locks 00:07:15.097 ************************************ 00:07:15.097 14:23:07 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:15.097 * Looking for test storage... 
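The app_repeat teardown just above goes through the same killprocess helper that every test in this log uses. An approximate reconstruction from the traced commands, showing only the path the trace actually exercises:

  killprocess() {
    local pid=$1
    kill -0 "$pid"                                      # assert the target is still running
    if [ "$(uname)" = Linux ]; then
      process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for the runs above
    fi
    if [ "$process_name" != sudo ]; then                # the real helper special-cases sudo wrappers; not taken here
      echo "killing process with pid $pid"
      kill "$pid"
    fi
    wait "$pid"
  }

  killprocess 72179                                     # what the trace above did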
00:07:15.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:15.097 14:23:07 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:15.097 14:23:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:07:15.097 14:23:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:15.356 14:23:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.356 14:23:07 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:15.356 14:23:07 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.357 14:23:07 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:15.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.357 --rc genhtml_branch_coverage=1 00:07:15.357 --rc genhtml_function_coverage=1 00:07:15.357 --rc genhtml_legend=1 00:07:15.357 --rc geninfo_all_blocks=1 00:07:15.357 --rc geninfo_unexecuted_blocks=1 00:07:15.357 00:07:15.357 ' 00:07:15.357 14:23:07 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:15.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.357 --rc genhtml_branch_coverage=1 00:07:15.357 --rc genhtml_function_coverage=1 
00:07:15.357 --rc genhtml_legend=1 00:07:15.357 --rc geninfo_all_blocks=1 00:07:15.357 --rc geninfo_unexecuted_blocks=1 00:07:15.357 00:07:15.357 ' 00:07:15.357 14:23:07 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:15.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.357 --rc genhtml_branch_coverage=1 00:07:15.357 --rc genhtml_function_coverage=1 00:07:15.357 --rc genhtml_legend=1 00:07:15.357 --rc geninfo_all_blocks=1 00:07:15.357 --rc geninfo_unexecuted_blocks=1 00:07:15.357 00:07:15.357 ' 00:07:15.357 14:23:07 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:15.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.357 --rc genhtml_branch_coverage=1 00:07:15.357 --rc genhtml_function_coverage=1 00:07:15.357 --rc genhtml_legend=1 00:07:15.357 --rc geninfo_all_blocks=1 00:07:15.357 --rc geninfo_unexecuted_blocks=1 00:07:15.357 00:07:15.357 ' 00:07:15.357 14:23:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:15.357 14:23:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:15.357 14:23:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:15.357 14:23:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:15.357 14:23:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.357 14:23:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.357 14:23:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.357 ************************************ 00:07:15.357 START TEST default_locks 00:07:15.357 ************************************ 00:07:15.357 14:23:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:15.357 14:23:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72623 00:07:15.357 14:23:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72623 00:07:15.357 14:23:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72623 ']' 00:07:15.357 14:23:07 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.357 14:23:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:15.357 14:23:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.357 14:23:07 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.357 14:23:07 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.357 14:23:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.357 [2024-12-16 14:23:07.433776] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
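Before the cpu_locks tests proper, the trace above walks through the lcov version gate from scripts/common.sh (lt 1.15 2 via cmp_versions). A simplified sketch of that comparison, assuming purely numeric version fields (the real helper also validates each field through a decimal check):

  lt() {   # return 0 if $1 is strictly older than $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                                            # equal versions are not "less than"
  }

  lt 1.15 2 && echo "old lcov: keep --rc lcov_branch_coverage=1"   # matches the LCOV_OPTS chosen above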
00:07:15.357 [2024-12-16 14:23:07.433924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72623 ] 00:07:15.624 [2024-12-16 14:23:07.576278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.624 [2024-12-16 14:23:07.595745] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.624 [2024-12-16 14:23:07.633267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.624 14:23:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.624 14:23:07 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:15.624 14:23:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72623 00:07:15.624 14:23:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.624 14:23:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72623 00:07:15.911 14:23:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72623 00:07:15.911 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 72623 ']' 00:07:15.911 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 72623 00:07:15.911 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:15.911 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.911 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72623 00:07:15.911 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.911 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.911 killing process with pid 72623 00:07:15.911 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72623' 00:07:15.911 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 72623 00:07:15.911 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 72623 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72623 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72623 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 72623 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72623 ']' 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.170 
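The locks_exist check traced above is the core assertion of the whole cpu_locks group: a target started with -m 0x1 must be holding a lock whose entry contains spdk_cpu_lock. Its essence, as traced:

  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  locks_exist 72623 && echo "core lock held by pid 72623"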
14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.170 ERROR: process (pid: 72623) is no longer running 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.170 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72623) - No such process 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:16.170 00:07:16.170 real 0m0.940s 00:07:16.170 user 0m0.959s 00:07:16.170 sys 0m0.381s 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.170 14:23:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.170 ************************************ 00:07:16.170 END TEST default_locks 00:07:16.170 ************************************ 00:07:16.170 14:23:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:16.170 14:23:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.170 14:23:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.170 14:23:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.170 ************************************ 00:07:16.170 START TEST default_locks_via_rpc 00:07:16.170 ************************************ 00:07:16.170 14:23:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:16.170 14:23:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72664 00:07:16.170 14:23:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72664 00:07:16.170 14:23:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:16.170 14:23:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72664 ']' 00:07:16.170 14:23:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.170 14:23:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:07:16.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.170 14:23:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.170 14:23:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.170 14:23:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.429 [2024-12-16 14:23:08.424749] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:16.429 [2024-12-16 14:23:08.424889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72664 ] 00:07:16.429 [2024-12-16 14:23:08.566836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.429 [2024-12-16 14:23:08.588630] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.429 [2024-12-16 14:23:08.626499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.364 14:23:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.364 14:23:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:17.364 14:23:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:17.364 14:23:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.364 14:23:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.364 14:23:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.364 14:23:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:17.364 14:23:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:17.364 14:23:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:17.364 14:23:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:17.364 14:23:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:17.364 14:23:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.364 14:23:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.364 14:23:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.364 14:23:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72664 00:07:17.364 14:23:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:17.364 14:23:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72664 00:07:17.932 14:23:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72664 00:07:17.932 14:23:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 72664 ']' 00:07:17.932 14:23:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 72664 00:07:17.932 14:23:09 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:17.932 14:23:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.932 14:23:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72664 00:07:17.932 killing process with pid 72664 00:07:17.932 14:23:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.932 14:23:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.932 14:23:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72664' 00:07:17.932 14:23:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 72664 00:07:17.932 14:23:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 72664 00:07:18.191 00:07:18.191 real 0m1.779s 00:07:18.191 user 0m2.037s 00:07:18.191 sys 0m0.475s 00:07:18.191 14:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.191 ************************************ 00:07:18.191 14:23:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.191 END TEST default_locks_via_rpc 00:07:18.191 ************************************ 00:07:18.191 14:23:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:18.191 14:23:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.191 14:23:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.191 14:23:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.191 ************************************ 00:07:18.191 START TEST non_locking_app_on_locked_coremask 00:07:18.191 ************************************ 00:07:18.191 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:18.191 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72715 00:07:18.191 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72715 /var/tmp/spdk.sock 00:07:18.191 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:18.191 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72715 ']' 00:07:18.191 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.191 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.191 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
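The default_locks_via_rpc run that just finished drops and re-takes the same lock at runtime instead of at start-up. A condensed sketch using the pid and default RPC socket from the trace (the real no_locks helper collects the lock files into an array and asserts it is empty):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py        # defaults to /var/tmp/spdk.sock
  $rpc framework_disable_cpumask_locks                   # release the per-core lock
  lslocks -p 72664 | grep -c spdk_cpu_lock || true       # expect 0 while disabled
  $rpc framework_enable_cpumask_locks                    # take it again
  lslocks -p 72664 | grep -q spdk_cpu_lock && echo "lock re-acquired"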
00:07:18.191 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.191 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.191 [2024-12-16 14:23:10.263593] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:18.191 [2024-12-16 14:23:10.263722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72715 ] 00:07:18.450 [2024-12-16 14:23:10.412082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.450 [2024-12-16 14:23:10.432551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.450 [2024-12-16 14:23:10.467617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.450 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.450 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:18.450 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72718 00:07:18.450 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72718 /var/tmp/spdk2.sock 00:07:18.450 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:18.450 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72718 ']' 00:07:18.450 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.450 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.450 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.450 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.450 14:23:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.709 [2024-12-16 14:23:10.649464] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:18.709 [2024-12-16 14:23:10.649862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72718 ] 00:07:18.709 [2024-12-16 14:23:10.812291] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
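The non_locking_app_on_locked_coremask run traced above and continued below boils down to two targets sharing core 0, where only the second opts out of the lock; condensed, with the waitforlisten polling between the steps omitted:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  $spdk_tgt -m 0x1 &                                                  # pid 72715 above: takes the core-0 lock
  $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 72718: "CPU core locks deactivated."
  lslocks -p 72715 | grep -q spdk_cpu_lock                            # the lock stays with the first target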
00:07:18.709 [2024-12-16 14:23:10.812373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.709 [2024-12-16 14:23:10.855560] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.967 [2024-12-16 14:23:10.926557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.534 14:23:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.534 14:23:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:19.534 14:23:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72715 00:07:19.534 14:23:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72715 00:07:19.534 14:23:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:20.470 14:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72715 00:07:20.470 14:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72715 ']' 00:07:20.470 14:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72715 00:07:20.470 14:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:20.470 14:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.471 14:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72715 00:07:20.471 killing process with pid 72715 00:07:20.471 14:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.471 14:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.471 14:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72715' 00:07:20.471 14:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72715 00:07:20.471 14:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72715 00:07:21.038 14:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72718 00:07:21.038 14:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72718 ']' 00:07:21.038 14:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72718 00:07:21.038 14:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:21.038 14:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.038 14:23:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72718 00:07:21.038 killing process with pid 72718 00:07:21.038 14:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.038 14:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.038 14:23:13 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72718' 00:07:21.038 14:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72718 00:07:21.038 14:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72718 00:07:21.296 ************************************ 00:07:21.297 END TEST non_locking_app_on_locked_coremask 00:07:21.297 ************************************ 00:07:21.297 00:07:21.297 real 0m3.045s 00:07:21.297 user 0m3.577s 00:07:21.297 sys 0m0.905s 00:07:21.297 14:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.297 14:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.297 14:23:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:21.297 14:23:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.297 14:23:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.297 14:23:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.297 ************************************ 00:07:21.297 START TEST locking_app_on_unlocked_coremask 00:07:21.297 ************************************ 00:07:21.297 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:21.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.297 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72786 00:07:21.297 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72786 /var/tmp/spdk.sock 00:07:21.297 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:21.297 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72786 ']' 00:07:21.297 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.297 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.297 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.297 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.297 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.297 [2024-12-16 14:23:13.355134] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:21.297 [2024-12-16 14:23:13.355256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72786 ] 00:07:21.556 [2024-12-16 14:23:13.501669] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:21.556 [2024-12-16 14:23:13.501740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.556 [2024-12-16 14:23:13.521626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.556 [2024-12-16 14:23:13.556497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.556 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.556 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:21.556 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72789 00:07:21.556 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72789 /var/tmp/spdk2.sock 00:07:21.556 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72789 ']' 00:07:21.556 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.556 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.556 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:21.556 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.556 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.556 14:23:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.556 [2024-12-16 14:23:13.738545] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:21.556 [2024-12-16 14:23:13.738667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72789 ] 00:07:21.815 [2024-12-16 14:23:13.900387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.815 [2024-12-16 14:23:13.940905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.815 [2024-12-16 14:23:14.012431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.073 14:23:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.073 14:23:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:22.073 14:23:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72789 00:07:22.073 14:23:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72789 00:07:22.073 14:23:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:23.009 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72786 00:07:23.009 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72786 ']' 00:07:23.009 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72786 00:07:23.009 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:23.009 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.009 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72786 00:07:23.009 killing process with pid 72786 00:07:23.009 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.009 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.009 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72786' 00:07:23.009 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72786 00:07:23.009 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72786 00:07:23.575 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72789 00:07:23.575 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72789 ']' 00:07:23.575 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72789 00:07:23.575 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:23.575 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.575 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72789 00:07:23.575 killing process with pid 72789 00:07:23.575 14:23:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.575 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.575 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72789' 00:07:23.575 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72789 00:07:23.575 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72789 00:07:23.575 00:07:23.575 real 0m2.474s 00:07:23.575 user 0m2.834s 00:07:23.575 sys 0m0.866s 00:07:23.575 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.575 ************************************ 00:07:23.575 END TEST locking_app_on_unlocked_coremask 00:07:23.575 ************************************ 00:07:23.575 14:23:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.834 14:23:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:23.834 14:23:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.834 14:23:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.834 14:23:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.834 ************************************ 00:07:23.834 START TEST locking_app_on_locked_coremask 00:07:23.834 ************************************ 00:07:23.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.834 14:23:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:23.834 14:23:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72843 00:07:23.834 14:23:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72843 /var/tmp/spdk.sock 00:07:23.834 14:23:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72843 ']' 00:07:23.834 14:23:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:23.834 14:23:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.834 14:23:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.834 14:23:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.834 14:23:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.834 14:23:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.834 [2024-12-16 14:23:15.874604] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
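The locking_app_on_unlocked_coremask run that ended just above is the mirror image of the previous test: the first target declines the lock, so the second, unmodified target on the same core claims it; condensed from the trace:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  $spdk_tgt -m 0x1 --disable-cpumask-locks &             # pid 72786 above: starts without the lock
  $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &              # pid 72789: takes the core-0 lock
  lslocks -p 72789 | grep -q spdk_cpu_lock               # verified before both targets are killed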
00:07:23.834 [2024-12-16 14:23:15.875362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72843 ] 00:07:23.834 [2024-12-16 14:23:16.027312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.093 [2024-12-16 14:23:16.052608] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.093 [2024-12-16 14:23:16.096110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72852 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72852 /var/tmp/spdk2.sock 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72852 /var/tmp/spdk2.sock 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72852 /var/tmp/spdk2.sock 00:07:24.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72852 ']' 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.093 14:23:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.352 [2024-12-16 14:23:16.294060] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:24.352 [2024-12-16 14:23:16.294339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72852 ] 00:07:24.352 [2024-12-16 14:23:16.461656] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72843 has claimed it. 00:07:24.352 [2024-12-16 14:23:16.461741] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:24.919 ERROR: process (pid: 72852) is no longer running 00:07:24.919 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72852) - No such process 00:07:24.919 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.919 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:24.919 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:24.919 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:24.919 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:24.919 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:24.919 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72843 00:07:24.919 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.919 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72843 00:07:25.487 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72843 00:07:25.487 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72843 ']' 00:07:25.487 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72843 00:07:25.487 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:25.487 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.487 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72843 00:07:25.487 killing process with pid 72843 00:07:25.487 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.487 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.487 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72843' 00:07:25.487 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72843 00:07:25.487 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72843 00:07:25.745 00:07:25.745 real 0m1.891s 00:07:25.745 user 0m2.239s 00:07:25.745 sys 0m0.520s 00:07:25.745 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.745 ************************************ 00:07:25.745 END 
TEST locking_app_on_locked_coremask 00:07:25.745 ************************************ 00:07:25.745 14:23:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.745 14:23:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:25.745 14:23:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.745 14:23:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.745 14:23:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.745 ************************************ 00:07:25.745 START TEST locking_overlapped_coremask 00:07:25.745 ************************************ 00:07:25.745 14:23:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:25.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.745 14:23:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72897 00:07:25.745 14:23:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72897 /var/tmp/spdk.sock 00:07:25.745 14:23:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72897 ']' 00:07:25.745 14:23:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.745 14:23:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:25.745 14:23:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.745 14:23:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.745 14:23:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.745 14:23:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.745 [2024-12-16 14:23:17.823726] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:25.745 [2024-12-16 14:23:17.823828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72897 ] 00:07:26.004 [2024-12-16 14:23:17.964937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:26.004 [2024-12-16 14:23:17.988172] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.004 [2024-12-16 14:23:17.988338] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.004 [2024-12-16 14:23:17.988330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.004 [2024-12-16 14:23:18.028413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72903 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72903 /var/tmp/spdk2.sock 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72903 /var/tmp/spdk2.sock 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72903 /var/tmp/spdk2.sock 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72903 ']' 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:26.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.004 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.280 [2024-12-16 14:23:18.225247] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:26.281 [2024-12-16 14:23:18.225762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72903 ] 00:07:26.281 [2024-12-16 14:23:18.385231] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72897 has claimed it. 00:07:26.281 [2024-12-16 14:23:18.385333] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:26.848 ERROR: process (pid: 72903) is no longer running 00:07:26.848 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72903) - No such process 00:07:26.848 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.848 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72897 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 72897 ']' 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 72897 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72897 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72897' 00:07:26.849 killing process with pid 72897 00:07:26.849 14:23:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 72897 00:07:26.849 14:23:18 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 72897 00:07:27.108 00:07:27.108 real 0m1.469s 00:07:27.108 user 0m4.077s 00:07:27.108 sys 0m0.310s 00:07:27.108 14:23:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.108 14:23:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.108 ************************************ 00:07:27.108 END TEST locking_overlapped_coremask 00:07:27.108 ************************************ 00:07:27.108 14:23:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:27.108 14:23:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.108 14:23:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.108 14:23:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.108 ************************************ 00:07:27.108 START TEST locking_overlapped_coremask_via_rpc 00:07:27.108 ************************************ 00:07:27.108 14:23:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:27.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.108 14:23:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72948 00:07:27.108 14:23:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72948 /var/tmp/spdk.sock 00:07:27.108 14:23:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:27.108 14:23:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72948 ']' 00:07:27.108 14:23:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.108 14:23:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.108 14:23:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.108 14:23:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.108 14:23:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.368 [2024-12-16 14:23:19.329413] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:27.368 [2024-12-16 14:23:19.329513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72948 ] 00:07:27.368 [2024-12-16 14:23:19.470176] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:27.368 [2024-12-16 14:23:19.470352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.368 [2024-12-16 14:23:19.493611] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.368 [2024-12-16 14:23:19.493725] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.368 [2024-12-16 14:23:19.493731] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.368 [2024-12-16 14:23:19.531025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:28.305 14:23:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.305 14:23:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:28.305 14:23:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72966 00:07:28.305 14:23:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72966 /var/tmp/spdk2.sock 00:07:28.305 14:23:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72966 ']' 00:07:28.305 14:23:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:28.305 14:23:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:28.305 14:23:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.305 14:23:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:28.305 14:23:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.305 14:23:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.305 [2024-12-16 14:23:20.352364] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:28.305 [2024-12-16 14:23:20.352714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72966 ] 00:07:28.564 [2024-12-16 14:23:20.514679] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:28.564 [2024-12-16 14:23:20.514929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.564 [2024-12-16 14:23:20.559373] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.564 [2024-12-16 14:23:20.562559] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.564 [2024-12-16 14:23:20.562559] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:28.564 [2024-12-16 14:23:20.633308] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.132 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.133 [2024-12-16 14:23:21.308631] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72948 has claimed it. 00:07:29.133 request: 00:07:29.133 { 00:07:29.133 "method": "framework_enable_cpumask_locks", 00:07:29.133 "req_id": 1 00:07:29.133 } 00:07:29.133 Got JSON-RPC error response 00:07:29.133 response: 00:07:29.133 { 00:07:29.133 "code": -32603, 00:07:29.133 "message": "Failed to claim CPU core: 2" 00:07:29.133 } 00:07:29.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
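(The same request can be issued by hand against the second target; the socket path and the -32603 response are the ones shown above, assuming scripts/rpc.py from the repository is used in place of the test helper rpc_cmd:)
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected while the first target still holds the lock on core 2:
    # request:  {"method": "framework_enable_cpumask_locks", "req_id": 1}
    # response: {"code": -32603, "message": "Failed to claim CPU core: 2"}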
00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72948 /var/tmp/spdk.sock 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72948 ']' 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.133 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.392 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.392 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:29.392 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72966 /var/tmp/spdk2.sock 00:07:29.392 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72966 ']' 00:07:29.392 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.392 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.392 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:29.392 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.392 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.960 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.960 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:29.960 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:29.960 ************************************ 00:07:29.960 END TEST locking_overlapped_coremask_via_rpc 00:07:29.960 ************************************ 00:07:29.960 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:29.960 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:29.960 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:29.960 00:07:29.960 real 0m2.642s 00:07:29.960 user 0m1.399s 00:07:29.960 sys 0m0.171s 00:07:29.960 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.960 14:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.960 14:23:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:29.960 14:23:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72948 ]] 00:07:29.960 14:23:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72948 00:07:29.960 14:23:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72948 ']' 00:07:29.960 14:23:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72948 00:07:29.960 14:23:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:29.960 14:23:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.960 14:23:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72948 00:07:29.960 14:23:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.960 14:23:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.961 killing process with pid 72948 00:07:29.961 14:23:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72948' 00:07:29.961 14:23:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72948 00:07:29.961 14:23:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72948 00:07:30.219 14:23:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72966 ]] 00:07:30.220 14:23:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72966 00:07:30.220 14:23:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72966 ']' 00:07:30.220 14:23:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72966 00:07:30.220 14:23:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:30.220 14:23:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.220 
14:23:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72966 00:07:30.220 killing process with pid 72966 00:07:30.220 14:23:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:30.220 14:23:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:30.220 14:23:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72966' 00:07:30.220 14:23:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72966 00:07:30.220 14:23:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72966 00:07:30.479 14:23:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:30.479 Process with pid 72948 is not found 00:07:30.479 14:23:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:30.479 14:23:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72948 ]] 00:07:30.479 14:23:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72948 00:07:30.479 14:23:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72948 ']' 00:07:30.479 14:23:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72948 00:07:30.479 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72948) - No such process 00:07:30.479 14:23:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72948 is not found' 00:07:30.479 14:23:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72966 ]] 00:07:30.479 14:23:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72966 00:07:30.479 14:23:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72966 ']' 00:07:30.479 14:23:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72966 00:07:30.479 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72966) - No such process 00:07:30.479 Process with pid 72966 is not found 00:07:30.479 14:23:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72966 is not found' 00:07:30.479 14:23:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:30.479 00:07:30.479 real 0m15.354s 00:07:30.479 user 0m29.391s 00:07:30.479 sys 0m4.311s 00:07:30.479 ************************************ 00:07:30.479 END TEST cpu_locks 00:07:30.479 ************************************ 00:07:30.479 14:23:22 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.479 14:23:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.479 ************************************ 00:07:30.479 END TEST event 00:07:30.479 ************************************ 00:07:30.479 00:07:30.479 real 0m42.141s 00:07:30.479 user 1m25.083s 00:07:30.479 sys 0m7.676s 00:07:30.479 14:23:22 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.479 14:23:22 event -- common/autotest_common.sh@10 -- # set +x 00:07:30.479 14:23:22 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:30.479 14:23:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.479 14:23:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.479 14:23:22 -- common/autotest_common.sh@10 -- # set +x 00:07:30.479 ************************************ 00:07:30.479 START TEST thread 00:07:30.479 ************************************ 00:07:30.479 14:23:22 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:30.479 * Looking for test storage... 
00:07:30.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:30.738 14:23:22 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:30.738 14:23:22 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:30.738 14:23:22 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:30.738 14:23:22 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:30.738 14:23:22 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.738 14:23:22 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.738 14:23:22 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.738 14:23:22 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.738 14:23:22 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.738 14:23:22 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.738 14:23:22 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.738 14:23:22 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.738 14:23:22 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.738 14:23:22 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.738 14:23:22 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.738 14:23:22 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:30.738 14:23:22 thread -- scripts/common.sh@345 -- # : 1 00:07:30.738 14:23:22 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.738 14:23:22 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:30.739 14:23:22 thread -- scripts/common.sh@365 -- # decimal 1 00:07:30.739 14:23:22 thread -- scripts/common.sh@353 -- # local d=1 00:07:30.739 14:23:22 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.739 14:23:22 thread -- scripts/common.sh@355 -- # echo 1 00:07:30.739 14:23:22 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.739 14:23:22 thread -- scripts/common.sh@366 -- # decimal 2 00:07:30.739 14:23:22 thread -- scripts/common.sh@353 -- # local d=2 00:07:30.739 14:23:22 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.739 14:23:22 thread -- scripts/common.sh@355 -- # echo 2 00:07:30.739 14:23:22 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.739 14:23:22 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.739 14:23:22 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.739 14:23:22 thread -- scripts/common.sh@368 -- # return 0 00:07:30.739 14:23:22 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.739 14:23:22 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:30.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.739 --rc genhtml_branch_coverage=1 00:07:30.739 --rc genhtml_function_coverage=1 00:07:30.739 --rc genhtml_legend=1 00:07:30.739 --rc geninfo_all_blocks=1 00:07:30.739 --rc geninfo_unexecuted_blocks=1 00:07:30.739 00:07:30.739 ' 00:07:30.739 14:23:22 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:30.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.739 --rc genhtml_branch_coverage=1 00:07:30.739 --rc genhtml_function_coverage=1 00:07:30.739 --rc genhtml_legend=1 00:07:30.739 --rc geninfo_all_blocks=1 00:07:30.739 --rc geninfo_unexecuted_blocks=1 00:07:30.739 00:07:30.739 ' 00:07:30.739 14:23:22 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:30.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:30.739 --rc genhtml_branch_coverage=1 00:07:30.739 --rc genhtml_function_coverage=1 00:07:30.739 --rc genhtml_legend=1 00:07:30.739 --rc geninfo_all_blocks=1 00:07:30.739 --rc geninfo_unexecuted_blocks=1 00:07:30.739 00:07:30.739 ' 00:07:30.739 14:23:22 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:30.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.739 --rc genhtml_branch_coverage=1 00:07:30.739 --rc genhtml_function_coverage=1 00:07:30.739 --rc genhtml_legend=1 00:07:30.739 --rc geninfo_all_blocks=1 00:07:30.739 --rc geninfo_unexecuted_blocks=1 00:07:30.739 00:07:30.739 ' 00:07:30.739 14:23:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:30.739 14:23:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:30.739 14:23:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.739 14:23:22 thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.739 ************************************ 00:07:30.739 START TEST thread_poller_perf 00:07:30.739 ************************************ 00:07:30.739 14:23:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:30.739 [2024-12-16 14:23:22.806334] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:30.739 [2024-12-16 14:23:22.806638] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73098 ] 00:07:30.998 [2024-12-16 14:23:22.950552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.998 [2024-12-16 14:23:22.973227] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.998 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:31.934 [2024-12-16T14:23:24.134Z] ====================================== 00:07:31.934 [2024-12-16T14:23:24.134Z] busy:2210843227 (cyc) 00:07:31.934 [2024-12-16T14:23:24.134Z] total_run_count: 355000 00:07:31.934 [2024-12-16T14:23:24.134Z] tsc_hz: 2200000000 (cyc) 00:07:31.934 [2024-12-16T14:23:24.134Z] ====================================== 00:07:31.934 [2024-12-16T14:23:24.134Z] poller_cost: 6227 (cyc), 2830 (nsec) 00:07:31.934 00:07:31.934 ************************************ 00:07:31.934 END TEST thread_poller_perf 00:07:31.934 ************************************ 00:07:31.934 real 0m1.229s 00:07:31.934 user 0m1.083s 00:07:31.934 sys 0m0.039s 00:07:31.934 14:23:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.934 14:23:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:31.934 14:23:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:31.934 14:23:24 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:31.934 14:23:24 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.934 14:23:24 thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.934 ************************************ 00:07:31.934 START TEST thread_poller_perf 00:07:31.934 ************************************ 00:07:31.934 14:23:24 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:31.934 [2024-12-16 14:23:24.085416] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:31.934 [2024-12-16 14:23:24.085550] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73128 ] 00:07:32.193 [2024-12-16 14:23:24.229815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.193 Running 1000 pollers for 1 seconds with 0 microseconds period. 
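(A back-of-the-envelope check of the poller_cost figures reported above, assuming poller_cost(cyc) = busy / total_run_count and poller_cost(nsec) = cyc * 1e9 / tsc_hz; the inputs are the numbers from the 1-microsecond-period run:)
    echo $(( 2210843227 / 355000 ))             # ~6227 cyc per poller iteration
    echo $(( 6227 * 1000000000 / 2200000000 ))  # ~2830 nsec at the reported tsc_hz of 2200000000 (cyc)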
00:07:32.193 [2024-12-16 14:23:24.251806] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.128 [2024-12-16T14:23:25.328Z] ====================================== 00:07:33.128 [2024-12-16T14:23:25.328Z] busy:2201612034 (cyc) 00:07:33.128 [2024-12-16T14:23:25.328Z] total_run_count: 4490000 00:07:33.128 [2024-12-16T14:23:25.328Z] tsc_hz: 2200000000 (cyc) 00:07:33.128 [2024-12-16T14:23:25.328Z] ====================================== 00:07:33.128 [2024-12-16T14:23:25.328Z] poller_cost: 490 (cyc), 222 (nsec) 00:07:33.128 00:07:33.128 real 0m1.213s 00:07:33.128 user 0m1.075s 00:07:33.128 sys 0m0.032s 00:07:33.128 ************************************ 00:07:33.128 END TEST thread_poller_perf 00:07:33.128 ************************************ 00:07:33.128 14:23:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.128 14:23:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:33.128 14:23:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:33.387 ************************************ 00:07:33.387 END TEST thread 00:07:33.387 ************************************ 00:07:33.387 00:07:33.387 real 0m2.728s 00:07:33.387 user 0m2.306s 00:07:33.387 sys 0m0.203s 00:07:33.387 14:23:25 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.387 14:23:25 thread -- common/autotest_common.sh@10 -- # set +x 00:07:33.387 14:23:25 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:33.387 14:23:25 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:33.387 14:23:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.387 14:23:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.387 14:23:25 -- common/autotest_common.sh@10 -- # set +x 00:07:33.387 ************************************ 00:07:33.387 START TEST app_cmdline 00:07:33.387 ************************************ 00:07:33.387 14:23:25 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:33.387 * Looking for test storage... 
00:07:33.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:33.387 14:23:25 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:33.387 14:23:25 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:33.387 14:23:25 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:33.387 14:23:25 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.387 14:23:25 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:33.387 14:23:25 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.387 14:23:25 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:33.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.387 --rc genhtml_branch_coverage=1 00:07:33.387 --rc genhtml_function_coverage=1 00:07:33.387 --rc genhtml_legend=1 00:07:33.387 --rc geninfo_all_blocks=1 00:07:33.387 --rc geninfo_unexecuted_blocks=1 00:07:33.387 00:07:33.387 ' 00:07:33.387 14:23:25 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:33.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.387 --rc genhtml_branch_coverage=1 00:07:33.387 --rc genhtml_function_coverage=1 00:07:33.387 --rc genhtml_legend=1 00:07:33.387 --rc geninfo_all_blocks=1 00:07:33.387 --rc geninfo_unexecuted_blocks=1 00:07:33.387 
00:07:33.387 ' 00:07:33.387 14:23:25 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:33.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.387 --rc genhtml_branch_coverage=1 00:07:33.387 --rc genhtml_function_coverage=1 00:07:33.387 --rc genhtml_legend=1 00:07:33.387 --rc geninfo_all_blocks=1 00:07:33.387 --rc geninfo_unexecuted_blocks=1 00:07:33.387 00:07:33.387 ' 00:07:33.387 14:23:25 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:33.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.387 --rc genhtml_branch_coverage=1 00:07:33.387 --rc genhtml_function_coverage=1 00:07:33.387 --rc genhtml_legend=1 00:07:33.387 --rc geninfo_all_blocks=1 00:07:33.387 --rc geninfo_unexecuted_blocks=1 00:07:33.387 00:07:33.387 ' 00:07:33.387 14:23:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:33.387 14:23:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73211 00:07:33.387 14:23:25 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:33.387 14:23:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73211 00:07:33.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.387 14:23:25 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 73211 ']' 00:07:33.387 14:23:25 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.387 14:23:25 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.387 14:23:25 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.387 14:23:25 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.387 14:23:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:33.646 [2024-12-16 14:23:25.633574] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:33.646 [2024-12-16 14:23:25.633675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73211 ] 00:07:33.646 [2024-12-16 14:23:25.780617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.646 [2024-12-16 14:23:25.801008] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.646 [2024-12-16 14:23:25.836938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.904 14:23:25 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.904 14:23:25 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:33.904 14:23:25 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:34.163 { 00:07:34.163 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:07:34.163 "fields": { 00:07:34.163 "major": 25, 00:07:34.163 "minor": 1, 00:07:34.163 "patch": 0, 00:07:34.163 "suffix": "-pre", 00:07:34.163 "commit": "e01cb43b8" 00:07:34.163 } 00:07:34.163 } 00:07:34.163 14:23:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:34.163 14:23:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:34.163 14:23:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:34.163 14:23:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:34.163 14:23:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:34.163 14:23:26 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.163 14:23:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:34.163 14:23:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:34.163 14:23:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:34.163 14:23:26 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.163 14:23:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:34.163 14:23:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:34.163 14:23:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.163 14:23:26 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:34.163 14:23:26 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.163 14:23:26 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.163 14:23:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.163 14:23:26 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.163 14:23:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.163 14:23:26 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.163 14:23:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.163 14:23:26 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.163 14:23:26 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:34.163 14:23:26 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.422 request: 00:07:34.422 { 00:07:34.422 "method": "env_dpdk_get_mem_stats", 00:07:34.422 "req_id": 1 00:07:34.422 } 00:07:34.422 Got JSON-RPC error response 00:07:34.422 response: 00:07:34.422 { 00:07:34.422 "code": -32601, 00:07:34.422 "message": "Method not found" 00:07:34.422 } 00:07:34.422 14:23:26 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:34.422 14:23:26 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.422 14:23:26 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:34.422 14:23:26 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.422 14:23:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73211 00:07:34.422 14:23:26 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 73211 ']' 00:07:34.422 14:23:26 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 73211 00:07:34.422 14:23:26 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:34.422 14:23:26 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.422 14:23:26 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73211 00:07:34.422 killing process with pid 73211 00:07:34.422 14:23:26 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.423 14:23:26 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.423 14:23:26 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73211' 00:07:34.423 14:23:26 app_cmdline -- common/autotest_common.sh@973 -- # kill 73211 00:07:34.423 14:23:26 app_cmdline -- common/autotest_common.sh@978 -- # wait 73211 00:07:34.682 ************************************ 00:07:34.682 END TEST app_cmdline 00:07:34.682 ************************************ 00:07:34.682 00:07:34.682 real 0m1.450s 00:07:34.682 user 0m1.919s 00:07:34.682 sys 0m0.370s 00:07:34.682 14:23:26 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.682 14:23:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:34.682 14:23:26 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:34.682 14:23:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.682 14:23:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.682 14:23:26 -- common/autotest_common.sh@10 -- # set +x 00:07:34.941 ************************************ 00:07:34.941 START TEST version 00:07:34.941 ************************************ 00:07:34.941 14:23:26 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:34.941 * Looking for test storage... 
00:07:34.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:34.941 14:23:26 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:34.941 14:23:26 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:34.941 14:23:26 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:34.941 14:23:27 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:34.941 14:23:27 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.941 14:23:27 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.941 14:23:27 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.941 14:23:27 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.941 14:23:27 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.941 14:23:27 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.941 14:23:27 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.941 14:23:27 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.941 14:23:27 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.941 14:23:27 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.941 14:23:27 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.941 14:23:27 version -- scripts/common.sh@344 -- # case "$op" in 00:07:34.941 14:23:27 version -- scripts/common.sh@345 -- # : 1 00:07:34.941 14:23:27 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.941 14:23:27 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:34.941 14:23:27 version -- scripts/common.sh@365 -- # decimal 1 00:07:34.941 14:23:27 version -- scripts/common.sh@353 -- # local d=1 00:07:34.941 14:23:27 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.941 14:23:27 version -- scripts/common.sh@355 -- # echo 1 00:07:34.941 14:23:27 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.941 14:23:27 version -- scripts/common.sh@366 -- # decimal 2 00:07:34.941 14:23:27 version -- scripts/common.sh@353 -- # local d=2 00:07:34.941 14:23:27 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.941 14:23:27 version -- scripts/common.sh@355 -- # echo 2 00:07:34.941 14:23:27 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.941 14:23:27 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.941 14:23:27 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.941 14:23:27 version -- scripts/common.sh@368 -- # return 0 00:07:34.942 14:23:27 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.942 14:23:27 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:34.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.942 --rc genhtml_branch_coverage=1 00:07:34.942 --rc genhtml_function_coverage=1 00:07:34.942 --rc genhtml_legend=1 00:07:34.942 --rc geninfo_all_blocks=1 00:07:34.942 --rc geninfo_unexecuted_blocks=1 00:07:34.942 00:07:34.942 ' 00:07:34.942 14:23:27 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:34.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.942 --rc genhtml_branch_coverage=1 00:07:34.942 --rc genhtml_function_coverage=1 00:07:34.942 --rc genhtml_legend=1 00:07:34.942 --rc geninfo_all_blocks=1 00:07:34.942 --rc geninfo_unexecuted_blocks=1 00:07:34.942 00:07:34.942 ' 00:07:34.942 14:23:27 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:34.942 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:34.942 --rc genhtml_branch_coverage=1 00:07:34.942 --rc genhtml_function_coverage=1 00:07:34.942 --rc genhtml_legend=1 00:07:34.942 --rc geninfo_all_blocks=1 00:07:34.942 --rc geninfo_unexecuted_blocks=1 00:07:34.942 00:07:34.942 ' 00:07:34.942 14:23:27 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:34.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.942 --rc genhtml_branch_coverage=1 00:07:34.942 --rc genhtml_function_coverage=1 00:07:34.942 --rc genhtml_legend=1 00:07:34.942 --rc geninfo_all_blocks=1 00:07:34.942 --rc geninfo_unexecuted_blocks=1 00:07:34.942 00:07:34.942 ' 00:07:34.942 14:23:27 version -- app/version.sh@17 -- # get_header_version major 00:07:34.942 14:23:27 version -- app/version.sh@14 -- # cut -f2 00:07:34.942 14:23:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:34.942 14:23:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:34.942 14:23:27 version -- app/version.sh@17 -- # major=25 00:07:34.942 14:23:27 version -- app/version.sh@18 -- # get_header_version minor 00:07:34.942 14:23:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:34.942 14:23:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:34.942 14:23:27 version -- app/version.sh@14 -- # cut -f2 00:07:34.942 14:23:27 version -- app/version.sh@18 -- # minor=1 00:07:34.942 14:23:27 version -- app/version.sh@19 -- # get_header_version patch 00:07:34.942 14:23:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:34.942 14:23:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:34.942 14:23:27 version -- app/version.sh@14 -- # cut -f2 00:07:34.942 14:23:27 version -- app/version.sh@19 -- # patch=0 00:07:34.942 14:23:27 version -- app/version.sh@20 -- # get_header_version suffix 00:07:34.942 14:23:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:34.942 14:23:27 version -- app/version.sh@14 -- # cut -f2 00:07:34.942 14:23:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:34.942 14:23:27 version -- app/version.sh@20 -- # suffix=-pre 00:07:34.942 14:23:27 version -- app/version.sh@22 -- # version=25.1 00:07:34.942 14:23:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:34.942 14:23:27 version -- app/version.sh@28 -- # version=25.1rc0 00:07:34.942 14:23:27 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:34.942 14:23:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:34.942 14:23:27 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:34.942 14:23:27 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:34.942 00:07:34.942 real 0m0.250s 00:07:34.942 user 0m0.153s 00:07:34.942 sys 0m0.132s 00:07:34.942 ************************************ 00:07:34.942 END TEST version 00:07:34.942 ************************************ 00:07:34.942 14:23:27 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.942 14:23:27 version -- common/autotest_common.sh@10 -- # set +x 00:07:35.201 14:23:27 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:35.201 14:23:27 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:35.201 14:23:27 -- spdk/autotest.sh@194 -- # uname -s 00:07:35.201 14:23:27 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:35.201 14:23:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:35.201 14:23:27 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:35.201 14:23:27 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:35.201 14:23:27 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:35.202 14:23:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.202 14:23:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.202 14:23:27 -- common/autotest_common.sh@10 -- # set +x 00:07:35.202 ************************************ 00:07:35.202 START TEST spdk_dd 00:07:35.202 ************************************ 00:07:35.202 14:23:27 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:35.202 * Looking for test storage... 00:07:35.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:35.202 14:23:27 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:35.202 14:23:27 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:07:35.202 14:23:27 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:35.202 14:23:27 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:35.202 14:23:27 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.202 14:23:27 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:35.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.202 --rc genhtml_branch_coverage=1 00:07:35.202 --rc genhtml_function_coverage=1 00:07:35.202 --rc genhtml_legend=1 00:07:35.202 --rc geninfo_all_blocks=1 00:07:35.202 --rc geninfo_unexecuted_blocks=1 00:07:35.202 00:07:35.202 ' 00:07:35.202 14:23:27 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:35.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.202 --rc genhtml_branch_coverage=1 00:07:35.202 --rc genhtml_function_coverage=1 00:07:35.202 --rc genhtml_legend=1 00:07:35.202 --rc geninfo_all_blocks=1 00:07:35.202 --rc geninfo_unexecuted_blocks=1 00:07:35.202 00:07:35.202 ' 00:07:35.202 14:23:27 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:35.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.202 --rc genhtml_branch_coverage=1 00:07:35.202 --rc genhtml_function_coverage=1 00:07:35.202 --rc genhtml_legend=1 00:07:35.202 --rc geninfo_all_blocks=1 00:07:35.202 --rc geninfo_unexecuted_blocks=1 00:07:35.202 00:07:35.202 ' 00:07:35.202 14:23:27 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:35.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.202 --rc genhtml_branch_coverage=1 00:07:35.202 --rc genhtml_function_coverage=1 00:07:35.202 --rc genhtml_legend=1 00:07:35.202 --rc geninfo_all_blocks=1 00:07:35.202 --rc geninfo_unexecuted_blocks=1 00:07:35.202 00:07:35.202 ' 00:07:35.202 14:23:27 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.202 14:23:27 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.202 14:23:27 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.202 14:23:27 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.202 14:23:27 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.202 14:23:27 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:35.202 14:23:27 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.202 14:23:27 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:35.461 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:35.721 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:35.721 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:35.721 14:23:27 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:35.721 14:23:27 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:35.721 14:23:27 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:35.721 14:23:27 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:35.721 14:23:27 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
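# ---------------------------------------------------------------------------
# Editor's note (not part of the captured log): the nvme_in_userspace /
# iter_pci_class_code trace above selects NVMe controllers by PCI class code
# 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVM Express). Below is a
# condensed, illustrative sketch of that enumeration; the lspci/grep/awk/tr
# pipeline is copied from the trace, while the surrounding loop is an assumed
# simplification of scripts/common.sh.
# ---------------------------------------------------------------------------
list_nvme_bdfs_sketch() {
    local bdf
    # lspci -mm -n -D prints one quoted record per device; keep the prog-if 02
    # entries whose class field matches 0108 and print the domain:bus:dev.fn.
    for bdf in $(lspci -mm -n -D | grep -i -- -p02 \
                 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
        # Mirror the /sys/bus/pci/drivers/nvme/<bdf> existence test seen in the trace.
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && printf '%s\n' "$bdf"
    done
}
# In this run the enumeration yields 0000:00:10.0 and 0000:00:11.0, the two
# QEMU NVMe controllers the dd tests operate on below.
# ---------------------------------------------------------------------------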
00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:35.721 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:35.722 * spdk_dd linked to liburing 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:35.722 14:23:27 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:35.722 14:23:27 spdk_dd -- 
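# ---------------------------------------------------------------------------
# Editor's note (not part of the captured log): the long dd/common.sh@142/143
# trace above is check_liburing walking the ELF NEEDED entries of the spdk_dd
# binary. Below is a condensed, illustrative sketch; the objdump/grep pipeline,
# the liburing.so.* pattern and the liburing_in_use flag come from the trace,
# while the compact loop is an assumed simplification.
# ---------------------------------------------------------------------------
check_liburing_sketch() {
    local lib liburing_in_use=0
    # Any NEEDED entry matching liburing.so.* means spdk_dd was built against
    # io_uring; liburing.so.2 is found in this run, hence the
    # "* spdk_dd linked to liburing" marker printed above.
    while read -r _ lib _; do
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
    echo "$liburing_in_use"
}
# The flag is consulted again in the (( liburing_in_use == 0 &&
# SPDK_TEST_URING == 1 )) check traced further below in dd/dd.sh.
# ---------------------------------------------------------------------------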
common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:35.722 14:23:27 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:35.723 14:23:27 spdk_dd -- 
common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@84 -- # 
CONFIG_IPSEC_MB=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:35.723 14:23:27 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:07:35.723 14:23:27 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:35.723 14:23:27 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:35.723 14:23:27 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:35.723 14:23:27 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:35.723 14:23:27 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:35.723 14:23:27 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:35.723 14:23:27 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:35.723 14:23:27 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.723 14:23:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:35.723 ************************************ 00:07:35.723 START TEST spdk_dd_basic_rw 00:07:35.723 ************************************ 00:07:35.723 14:23:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:35.723 * Looking for test storage... 00:07:35.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:35.982 14:23:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:35.982 14:23:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:35.982 14:23:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:07:35.982 14:23:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:35.982 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.982 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.982 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.982 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.982 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.982 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:35.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.983 --rc genhtml_branch_coverage=1 00:07:35.983 --rc genhtml_function_coverage=1 00:07:35.983 --rc genhtml_legend=1 00:07:35.983 --rc geninfo_all_blocks=1 00:07:35.983 --rc geninfo_unexecuted_blocks=1 00:07:35.983 00:07:35.983 ' 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:35.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.983 --rc genhtml_branch_coverage=1 00:07:35.983 --rc genhtml_function_coverage=1 00:07:35.983 --rc genhtml_legend=1 00:07:35.983 --rc geninfo_all_blocks=1 00:07:35.983 --rc geninfo_unexecuted_blocks=1 00:07:35.983 00:07:35.983 ' 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:35.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.983 --rc genhtml_branch_coverage=1 00:07:35.983 --rc genhtml_function_coverage=1 00:07:35.983 --rc genhtml_legend=1 00:07:35.983 --rc geninfo_all_blocks=1 00:07:35.983 --rc geninfo_unexecuted_blocks=1 00:07:35.983 00:07:35.983 ' 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:35.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.983 --rc genhtml_branch_coverage=1 00:07:35.983 --rc genhtml_function_coverage=1 00:07:35.983 --rc genhtml_legend=1 00:07:35.983 --rc geninfo_all_blocks=1 00:07:35.983 --rc geninfo_unexecuted_blocks=1 00:07:35.983 00:07:35.983 ' 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.983 14:23:28 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:35.983 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:36.245 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:36.245 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.246 ************************************ 00:07:36.246 START TEST dd_bs_lt_native_bs 00:07:36.246 ************************************ 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.246 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:36.246 { 00:07:36.246 "subsystems": [ 00:07:36.246 { 00:07:36.246 "subsystem": "bdev", 00:07:36.246 "config": [ 00:07:36.246 { 00:07:36.246 "params": { 00:07:36.246 "trtype": "pcie", 00:07:36.246 "traddr": "0000:00:10.0", 00:07:36.246 "name": "Nvme0" 00:07:36.246 }, 00:07:36.246 "method": "bdev_nvme_attach_controller" 00:07:36.246 }, 00:07:36.246 { 00:07:36.246 "method": "bdev_wait_for_examine" 00:07:36.246 } 00:07:36.246 ] 00:07:36.246 } 00:07:36.246 ] 00:07:36.246 } 00:07:36.246 [2024-12-16 14:23:28.300682] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:36.246 [2024-12-16 14:23:28.300807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73549 ] 00:07:36.505 [2024-12-16 14:23:28.449998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.505 [2024-12-16 14:23:28.473588] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.505 [2024-12-16 14:23:28.508354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.505 [2024-12-16 14:23:28.600788] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:36.505 [2024-12-16 14:23:28.600860] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.505 [2024-12-16 14:23:28.673118] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:36.764 00:07:36.764 real 0m0.471s 00:07:36.764 user 0m0.325s 00:07:36.764 sys 0m0.105s 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.764 14:23:28 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:36.764 ************************************ 00:07:36.764 END TEST dd_bs_lt_native_bs 00:07:36.764 ************************************ 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.764 ************************************ 00:07:36.764 START TEST dd_rw 00:07:36.764 ************************************ 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:36.764 14:23:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.332 14:23:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:37.332 14:23:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:37.332 14:23:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:37.333 14:23:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.333 [2024-12-16 14:23:29.382857] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
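The dd_bs_lt_native_bs run above reduces to two steps: dd/common.sh@132 scrapes the native block size out of the identify dump with the regex shown (4096 bytes for this controller), and spdk_dd is then expected to refuse a --bs of 2048 that is smaller than that. A minimal stand-alone sketch of the same check; $id, config.json and /tmp/in.bin are placeholders (the real test streams both the data and the config over /dev/fd descriptors):
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
re='LBA Format #04: Data Size: *([0-9]+)'         # same pattern as dd/common.sh@132 above
[[ $id =~ $re ]] && native_bs=${BASH_REMATCH[1]}  # $id: placeholder holding the identify dump; 4096 here
head -c 4096 /dev/zero > /tmp/in.bin              # placeholder input file
if "$SPDK_DD" --if=/tmp/in.bin --ob=Nvme0n1 --bs=2048 --json config.json; then  # config.json: placeholder
  echo "FAIL: --bs below the native block size was accepted"
else
  echo "OK: rejected, matching the '--bs value cannot be less than ... native block size' error above"
fi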
00:07:37.333 [2024-12-16 14:23:29.383165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73580 ] 00:07:37.333 { 00:07:37.333 "subsystems": [ 00:07:37.333 { 00:07:37.333 "subsystem": "bdev", 00:07:37.333 "config": [ 00:07:37.333 { 00:07:37.333 "params": { 00:07:37.333 "trtype": "pcie", 00:07:37.333 "traddr": "0000:00:10.0", 00:07:37.333 "name": "Nvme0" 00:07:37.333 }, 00:07:37.333 "method": "bdev_nvme_attach_controller" 00:07:37.333 }, 00:07:37.333 { 00:07:37.333 "method": "bdev_wait_for_examine" 00:07:37.333 } 00:07:37.333 ] 00:07:37.333 } 00:07:37.333 ] 00:07:37.333 } 00:07:37.333 [2024-12-16 14:23:29.530017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.592 [2024-12-16 14:23:29.551418] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.592 [2024-12-16 14:23:29.580132] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.592  [2024-12-16T14:23:29.792Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:37.592 00:07:37.592 14:23:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:37.592 14:23:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:37.592 14:23:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:37.592 14:23:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.851 [2024-12-16 14:23:29.834449] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
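Every spdk_dd call in this job gets the same small configuration from gen_conf: a bdev subsystem that attaches the PCIe controller at 0000:00:10.0 as Nvme0 and then waits for examine, passed on /dev/fd/62. A sketch of the equivalent invocation with the config in an ordinary file; /tmp/nvme0.json and /tmp/readback.bin are placeholder paths, the JSON body is copied from the dumps above:
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/tmp/readback.bin \
  --bs=4096 --qd=1 --count=15 --json /tmp/nvme0.json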
00:07:37.851 [2024-12-16 14:23:29.835007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73594 ] 00:07:37.851 { 00:07:37.851 "subsystems": [ 00:07:37.851 { 00:07:37.851 "subsystem": "bdev", 00:07:37.851 "config": [ 00:07:37.851 { 00:07:37.851 "params": { 00:07:37.851 "trtype": "pcie", 00:07:37.851 "traddr": "0000:00:10.0", 00:07:37.851 "name": "Nvme0" 00:07:37.851 }, 00:07:37.851 "method": "bdev_nvme_attach_controller" 00:07:37.851 }, 00:07:37.851 { 00:07:37.851 "method": "bdev_wait_for_examine" 00:07:37.851 } 00:07:37.851 ] 00:07:37.851 } 00:07:37.851 ] 00:07:37.851 } 00:07:37.851 [2024-12-16 14:23:29.981225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.851 [2024-12-16 14:23:29.998702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.851 [2024-12-16 14:23:30.027917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.110  [2024-12-16T14:23:30.310Z] Copying: 60/60 [kB] (average 14 MBps) 00:07:38.110 00:07:38.110 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.110 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:38.110 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:38.110 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:38.110 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:38.110 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:38.110 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:38.110 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:38.110 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:38.110 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:38.110 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.110 [2024-12-16 14:23:30.283510] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
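Each dd_rw iteration in this trace is the same four-command cycle; an illustrative reconstruction of the bs=4096, qd=1 pass that just finished, with "$CONF" standing in for the generated JSON config (every command and flag below appears verbatim above):
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0       # 61440 bytes of generated test data
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs=4096 --qd=1 --json "$CONF"              # write 15 blocks
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs=4096 --qd=1 --count=15 --json "$CONF"   # read them back
diff -q "$DUMP0" "$DUMP1"                                                          # contents must match
"$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json "$CONF"       # clear_nvme: zero 1 MiB
The trailing zero-fill is what produces the "Copying: 1024/1024 [kB]" lines between iterations.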
00:07:38.110 [2024-12-16 14:23:30.283771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73609 ] 00:07:38.110 { 00:07:38.110 "subsystems": [ 00:07:38.110 { 00:07:38.110 "subsystem": "bdev", 00:07:38.110 "config": [ 00:07:38.110 { 00:07:38.110 "params": { 00:07:38.110 "trtype": "pcie", 00:07:38.110 "traddr": "0000:00:10.0", 00:07:38.110 "name": "Nvme0" 00:07:38.110 }, 00:07:38.110 "method": "bdev_nvme_attach_controller" 00:07:38.110 }, 00:07:38.110 { 00:07:38.110 "method": "bdev_wait_for_examine" 00:07:38.110 } 00:07:38.110 ] 00:07:38.110 } 00:07:38.110 ] 00:07:38.110 } 00:07:38.369 [2024-12-16 14:23:30.425511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.369 [2024-12-16 14:23:30.445704] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.369 [2024-12-16 14:23:30.476327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.369  [2024-12-16T14:23:30.828Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:38.628 00:07:38.628 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:38.628 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:38.628 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:38.628 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:38.628 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:38.628 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:38.628 14:23:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.202 14:23:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:39.202 14:23:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:39.202 14:23:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:39.202 14:23:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.202 [2024-12-16 14:23:31.197634] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
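The sweep itself comes from basic_rw.sh@15-18 as traced earlier: queue depths 1 and 64, and block sizes obtained by shifting the 4096-byte native block size left by 0, 1 and 2. A sketch of that loop; the per-size counts (15, 7, 3) are taken straight from this trace, which keeps each transfer between 48 KiB and 60 KiB (15*4096=61440, 7*8192=57344, 3*16384=49152 bytes):
native_bs=4096
qds=(1 64)
bss=()
for s in 0 1 2; do
  bss+=( $(( native_bs << s )) )            # 4096 8192 16384
done
declare -A counts=( [4096]=15 [8192]=7 [16384]=3 )   # counts observed in this run
for bs in "${bss[@]}"; do
  for qd in "${qds[@]}"; do
    echo "bs=$bs qd=$qd size=$(( bs * ${counts[$bs]} ))"   # one write/read/verify cycle per pair
  done
done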
00:07:39.203 [2024-12-16 14:23:31.197718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73628 ] 00:07:39.203 { 00:07:39.203 "subsystems": [ 00:07:39.203 { 00:07:39.203 "subsystem": "bdev", 00:07:39.203 "config": [ 00:07:39.203 { 00:07:39.203 "params": { 00:07:39.203 "trtype": "pcie", 00:07:39.203 "traddr": "0000:00:10.0", 00:07:39.203 "name": "Nvme0" 00:07:39.203 }, 00:07:39.203 "method": "bdev_nvme_attach_controller" 00:07:39.203 }, 00:07:39.203 { 00:07:39.203 "method": "bdev_wait_for_examine" 00:07:39.203 } 00:07:39.203 ] 00:07:39.203 } 00:07:39.203 ] 00:07:39.203 } 00:07:39.203 [2024-12-16 14:23:31.332483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.203 [2024-12-16 14:23:31.350505] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.203 [2024-12-16 14:23:31.377169] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.531  [2024-12-16T14:23:31.731Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:39.531 00:07:39.531 14:23:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:39.531 14:23:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:39.531 14:23:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:39.531 14:23:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.531 [2024-12-16 14:23:31.631723] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:39.531 [2024-12-16 14:23:31.631816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73642 ] 00:07:39.531 { 00:07:39.531 "subsystems": [ 00:07:39.531 { 00:07:39.531 "subsystem": "bdev", 00:07:39.531 "config": [ 00:07:39.531 { 00:07:39.531 "params": { 00:07:39.531 "trtype": "pcie", 00:07:39.531 "traddr": "0000:00:10.0", 00:07:39.531 "name": "Nvme0" 00:07:39.531 }, 00:07:39.531 "method": "bdev_nvme_attach_controller" 00:07:39.531 }, 00:07:39.531 { 00:07:39.531 "method": "bdev_wait_for_examine" 00:07:39.531 } 00:07:39.531 ] 00:07:39.531 } 00:07:39.531 ] 00:07:39.531 } 00:07:39.790 [2024-12-16 14:23:31.774724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.790 [2024-12-16 14:23:31.792360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.790 [2024-12-16 14:23:31.819098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.790  [2024-12-16T14:23:32.250Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:40.050 00:07:40.050 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.050 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:40.050 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:40.050 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:40.050 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:40.050 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:40.050 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:40.050 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:40.050 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:40.050 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:40.050 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.050 { 00:07:40.050 "subsystems": [ 00:07:40.050 { 00:07:40.050 "subsystem": "bdev", 00:07:40.050 "config": [ 00:07:40.050 { 00:07:40.050 "params": { 00:07:40.050 "trtype": "pcie", 00:07:40.050 "traddr": "0000:00:10.0", 00:07:40.050 "name": "Nvme0" 00:07:40.050 }, 00:07:40.050 "method": "bdev_nvme_attach_controller" 00:07:40.050 }, 00:07:40.050 { 00:07:40.050 "method": "bdev_wait_for_examine" 00:07:40.050 } 00:07:40.050 ] 00:07:40.050 } 00:07:40.050 ] 00:07:40.050 } 00:07:40.050 [2024-12-16 14:23:32.081915] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:40.050 [2024-12-16 14:23:32.082222] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73657 ] 00:07:40.050 [2024-12-16 14:23:32.227765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.050 [2024-12-16 14:23:32.245862] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.309 [2024-12-16 14:23:32.273281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.309  [2024-12-16T14:23:32.509Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:40.309 00:07:40.309 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:40.309 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:40.309 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:40.309 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:40.309 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:40.309 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:40.309 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:40.309 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.876 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:40.876 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:40.876 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:40.877 14:23:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.877 [2024-12-16 14:23:32.986341] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:40.877 [2024-12-16 14:23:32.987108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73671 ] 00:07:40.877 { 00:07:40.877 "subsystems": [ 00:07:40.877 { 00:07:40.877 "subsystem": "bdev", 00:07:40.877 "config": [ 00:07:40.877 { 00:07:40.877 "params": { 00:07:40.877 "trtype": "pcie", 00:07:40.877 "traddr": "0000:00:10.0", 00:07:40.877 "name": "Nvme0" 00:07:40.877 }, 00:07:40.877 "method": "bdev_nvme_attach_controller" 00:07:40.877 }, 00:07:40.877 { 00:07:40.877 "method": "bdev_wait_for_examine" 00:07:40.877 } 00:07:40.877 ] 00:07:40.877 } 00:07:40.877 ] 00:07:40.877 } 00:07:41.136 [2024-12-16 14:23:33.133192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.136 [2024-12-16 14:23:33.151458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.136 [2024-12-16 14:23:33.179679] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.136  [2024-12-16T14:23:33.595Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:41.395 00:07:41.395 14:23:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:41.395 14:23:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:41.395 14:23:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.395 14:23:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.395 [2024-12-16 14:23:33.443538] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:41.395 [2024-12-16 14:23:33.443639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73684 ] 00:07:41.395 { 00:07:41.395 "subsystems": [ 00:07:41.395 { 00:07:41.395 "subsystem": "bdev", 00:07:41.395 "config": [ 00:07:41.395 { 00:07:41.395 "params": { 00:07:41.395 "trtype": "pcie", 00:07:41.395 "traddr": "0000:00:10.0", 00:07:41.395 "name": "Nvme0" 00:07:41.395 }, 00:07:41.395 "method": "bdev_nvme_attach_controller" 00:07:41.395 }, 00:07:41.395 { 00:07:41.395 "method": "bdev_wait_for_examine" 00:07:41.395 } 00:07:41.395 ] 00:07:41.395 } 00:07:41.395 ] 00:07:41.395 } 00:07:41.395 [2024-12-16 14:23:33.585659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.654 [2024-12-16 14:23:33.604761] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.655 [2024-12-16 14:23:33.631568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.655  [2024-12-16T14:23:33.855Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:41.655 00:07:41.655 14:23:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.655 14:23:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:41.655 14:23:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:41.655 14:23:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:41.655 14:23:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:41.655 14:23:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:41.655 14:23:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:41.655 14:23:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:41.655 14:23:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:41.655 14:23:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.655 14:23:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.913 [2024-12-16 14:23:33.887453] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:41.913 [2024-12-16 14:23:33.887706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73700 ] 00:07:41.913 { 00:07:41.913 "subsystems": [ 00:07:41.913 { 00:07:41.913 "subsystem": "bdev", 00:07:41.913 "config": [ 00:07:41.913 { 00:07:41.913 "params": { 00:07:41.913 "trtype": "pcie", 00:07:41.913 "traddr": "0000:00:10.0", 00:07:41.913 "name": "Nvme0" 00:07:41.913 }, 00:07:41.913 "method": "bdev_nvme_attach_controller" 00:07:41.913 }, 00:07:41.913 { 00:07:41.913 "method": "bdev_wait_for_examine" 00:07:41.913 } 00:07:41.913 ] 00:07:41.913 } 00:07:41.913 ] 00:07:41.913 } 00:07:41.913 [2024-12-16 14:23:34.033220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.913 [2024-12-16 14:23:34.054663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.913 [2024-12-16 14:23:34.082586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.172  [2024-12-16T14:23:34.372Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:42.172 00:07:42.172 14:23:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:42.172 14:23:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:42.172 14:23:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:42.172 14:23:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:42.172 14:23:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:42.172 14:23:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:42.172 14:23:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.740 14:23:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:42.740 14:23:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:42.740 14:23:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:42.740 14:23:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.740 [2024-12-16 14:23:34.787881] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:42.740 [2024-12-16 14:23:34.787978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73719 ] 00:07:42.740 { 00:07:42.740 "subsystems": [ 00:07:42.740 { 00:07:42.740 "subsystem": "bdev", 00:07:42.740 "config": [ 00:07:42.740 { 00:07:42.740 "params": { 00:07:42.740 "trtype": "pcie", 00:07:42.740 "traddr": "0000:00:10.0", 00:07:42.740 "name": "Nvme0" 00:07:42.740 }, 00:07:42.740 "method": "bdev_nvme_attach_controller" 00:07:42.740 }, 00:07:42.740 { 00:07:42.740 "method": "bdev_wait_for_examine" 00:07:42.740 } 00:07:42.740 ] 00:07:42.740 } 00:07:42.740 ] 00:07:42.740 } 00:07:42.740 [2024-12-16 14:23:34.931449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.000 [2024-12-16 14:23:34.950402] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.000 [2024-12-16 14:23:34.977672] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.000  [2024-12-16T14:23:35.200Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:43.000 00:07:43.000 14:23:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:43.000 14:23:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:43.000 14:23:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.000 14:23:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.259 { 00:07:43.259 "subsystems": [ 00:07:43.259 { 00:07:43.259 "subsystem": "bdev", 00:07:43.259 "config": [ 00:07:43.259 { 00:07:43.259 "params": { 00:07:43.259 "trtype": "pcie", 00:07:43.259 "traddr": "0000:00:10.0", 00:07:43.259 "name": "Nvme0" 00:07:43.259 }, 00:07:43.259 "method": "bdev_nvme_attach_controller" 00:07:43.259 }, 00:07:43.259 { 00:07:43.259 "method": "bdev_wait_for_examine" 00:07:43.259 } 00:07:43.259 ] 00:07:43.259 } 00:07:43.259 ] 00:07:43.259 } 00:07:43.259 [2024-12-16 14:23:35.231275] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:43.259 [2024-12-16 14:23:35.231591] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73732 ] 00:07:43.259 [2024-12-16 14:23:35.374527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.259 [2024-12-16 14:23:35.392343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.259 [2024-12-16 14:23:35.419258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.518  [2024-12-16T14:23:35.718Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:43.518 00:07:43.518 14:23:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.518 14:23:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:43.518 14:23:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:43.518 14:23:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:43.518 14:23:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:43.518 14:23:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:43.518 14:23:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:43.518 14:23:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:43.518 14:23:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:43.518 14:23:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.518 14:23:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.518 [2024-12-16 14:23:35.673833] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:43.518 [2024-12-16 14:23:35.673926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73748 ] 00:07:43.518 { 00:07:43.518 "subsystems": [ 00:07:43.518 { 00:07:43.518 "subsystem": "bdev", 00:07:43.518 "config": [ 00:07:43.518 { 00:07:43.518 "params": { 00:07:43.518 "trtype": "pcie", 00:07:43.518 "traddr": "0000:00:10.0", 00:07:43.518 "name": "Nvme0" 00:07:43.518 }, 00:07:43.518 "method": "bdev_nvme_attach_controller" 00:07:43.518 }, 00:07:43.518 { 00:07:43.518 "method": "bdev_wait_for_examine" 00:07:43.518 } 00:07:43.518 ] 00:07:43.518 } 00:07:43.518 ] 00:07:43.518 } 00:07:43.777 [2024-12-16 14:23:35.820983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.777 [2024-12-16 14:23:35.841823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.777 [2024-12-16 14:23:35.872328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.777  [2024-12-16T14:23:36.235Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:44.035 00:07:44.035 14:23:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:44.035 14:23:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:44.035 14:23:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:44.035 14:23:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:44.035 14:23:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:44.035 14:23:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:44.035 14:23:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:44.035 14:23:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:44.294 14:23:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:44.294 14:23:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:44.294 14:23:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:44.294 14:23:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:44.553 [2024-12-16 14:23:36.510040] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:44.553 [2024-12-16 14:23:36.510332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73761 ] 00:07:44.553 { 00:07:44.553 "subsystems": [ 00:07:44.553 { 00:07:44.553 "subsystem": "bdev", 00:07:44.553 "config": [ 00:07:44.553 { 00:07:44.553 "params": { 00:07:44.553 "trtype": "pcie", 00:07:44.553 "traddr": "0000:00:10.0", 00:07:44.553 "name": "Nvme0" 00:07:44.553 }, 00:07:44.553 "method": "bdev_nvme_attach_controller" 00:07:44.553 }, 00:07:44.553 { 00:07:44.553 "method": "bdev_wait_for_examine" 00:07:44.553 } 00:07:44.553 ] 00:07:44.553 } 00:07:44.553 ] 00:07:44.553 } 00:07:44.553 [2024-12-16 14:23:36.655449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.553 [2024-12-16 14:23:36.673976] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.553 [2024-12-16 14:23:36.700916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.812  [2024-12-16T14:23:37.012Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:44.812 00:07:44.812 14:23:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:44.812 14:23:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:44.812 14:23:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:44.812 14:23:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:44.812 [2024-12-16 14:23:36.948404] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:44.812 [2024-12-16 14:23:36.948674] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73775 ] 00:07:44.812 { 00:07:44.812 "subsystems": [ 00:07:44.812 { 00:07:44.812 "subsystem": "bdev", 00:07:44.812 "config": [ 00:07:44.812 { 00:07:44.812 "params": { 00:07:44.812 "trtype": "pcie", 00:07:44.812 "traddr": "0000:00:10.0", 00:07:44.812 "name": "Nvme0" 00:07:44.812 }, 00:07:44.812 "method": "bdev_nvme_attach_controller" 00:07:44.812 }, 00:07:44.812 { 00:07:44.812 "method": "bdev_wait_for_examine" 00:07:44.812 } 00:07:44.812 ] 00:07:44.812 } 00:07:44.812 ] 00:07:44.812 } 00:07:45.070 [2024-12-16 14:23:37.092337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.070 [2024-12-16 14:23:37.111029] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.070 [2024-12-16 14:23:37.138042] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.070  [2024-12-16T14:23:37.528Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:45.328 00:07:45.328 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.328 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:45.328 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:45.328 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:45.328 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:45.328 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:45.328 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:45.328 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:45.328 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:45.328 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:45.328 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.328 { 00:07:45.328 "subsystems": [ 00:07:45.328 { 00:07:45.328 "subsystem": "bdev", 00:07:45.328 "config": [ 00:07:45.328 { 00:07:45.328 "params": { 00:07:45.328 "trtype": "pcie", 00:07:45.328 "traddr": "0000:00:10.0", 00:07:45.328 "name": "Nvme0" 00:07:45.328 }, 00:07:45.328 "method": "bdev_nvme_attach_controller" 00:07:45.328 }, 00:07:45.328 { 00:07:45.328 "method": "bdev_wait_for_examine" 00:07:45.328 } 00:07:45.328 ] 00:07:45.328 } 00:07:45.328 ] 00:07:45.328 } 00:07:45.328 [2024-12-16 14:23:37.397383] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:45.328 [2024-12-16 14:23:37.397514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73790 ] 00:07:45.587 [2024-12-16 14:23:37.542095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.587 [2024-12-16 14:23:37.560013] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.587 [2024-12-16 14:23:37.586741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.587  [2024-12-16T14:23:37.787Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:45.587 00:07:45.587 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:45.587 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:45.587 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:45.587 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:45.587 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:45.587 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:45.587 14:23:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.155 14:23:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:46.156 14:23:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:46.156 14:23:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:46.156 14:23:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.156 { 00:07:46.156 "subsystems": [ 00:07:46.156 { 00:07:46.156 "subsystem": "bdev", 00:07:46.156 "config": [ 00:07:46.156 { 00:07:46.156 "params": { 00:07:46.156 "trtype": "pcie", 00:07:46.156 "traddr": "0000:00:10.0", 00:07:46.156 "name": "Nvme0" 00:07:46.156 }, 00:07:46.156 "method": "bdev_nvme_attach_controller" 00:07:46.156 }, 00:07:46.156 { 00:07:46.156 "method": "bdev_wait_for_examine" 00:07:46.156 } 00:07:46.156 ] 00:07:46.156 } 00:07:46.156 ] 00:07:46.156 } 00:07:46.156 [2024-12-16 14:23:38.218271] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:46.156 [2024-12-16 14:23:38.218567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73804 ] 00:07:46.415 [2024-12-16 14:23:38.364626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.415 [2024-12-16 14:23:38.382356] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.415 [2024-12-16 14:23:38.411210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.415  [2024-12-16T14:23:38.615Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:46.415 00:07:46.674 14:23:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:46.674 14:23:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:46.674 14:23:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:46.674 14:23:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.674 [2024-12-16 14:23:38.656719] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:46.674 [2024-12-16 14:23:38.656806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73818 ] 00:07:46.674 { 00:07:46.674 "subsystems": [ 00:07:46.674 { 00:07:46.674 "subsystem": "bdev", 00:07:46.674 "config": [ 00:07:46.674 { 00:07:46.674 "params": { 00:07:46.674 "trtype": "pcie", 00:07:46.674 "traddr": "0000:00:10.0", 00:07:46.674 "name": "Nvme0" 00:07:46.674 }, 00:07:46.674 "method": "bdev_nvme_attach_controller" 00:07:46.674 }, 00:07:46.674 { 00:07:46.674 "method": "bdev_wait_for_examine" 00:07:46.674 } 00:07:46.674 ] 00:07:46.674 } 00:07:46.674 ] 00:07:46.674 } 00:07:46.674 [2024-12-16 14:23:38.793273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.674 [2024-12-16 14:23:38.813307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.674 [2024-12-16 14:23:38.840202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.933  [2024-12-16T14:23:39.133Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:46.933 00:07:46.933 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.933 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:46.933 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:46.933 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:46.933 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:46.934 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:46.934 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:46.934 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
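Once this final clear_nvme finishes (the 1024/1024 kB copy just below), the first MiB of the bdev is zeroed again before the offset test begins. The trace never verifies that directly; a hypothetical spot-check, reusing only flags that appear above and with "$CONF" and /tmp/first_mib.bin as placeholders, could look like:
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$SPDK_DD" --ib=Nvme0n1 --of=/tmp/first_mib.bin --bs=1048576 --count=1 --json "$CONF"  # read 1 MiB back
cmp /tmp/first_mib.bin <(head -c 1048576 /dev/zero) && echo "first MiB is zeroed"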
00:07:46.934 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:46.934 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:46.934 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.934 [2024-12-16 14:23:39.095411] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:46.934 [2024-12-16 14:23:39.095525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73833 ] 00:07:46.934 { 00:07:46.934 "subsystems": [ 00:07:46.934 { 00:07:46.934 "subsystem": "bdev", 00:07:46.934 "config": [ 00:07:46.934 { 00:07:46.934 "params": { 00:07:46.934 "trtype": "pcie", 00:07:46.934 "traddr": "0000:00:10.0", 00:07:46.934 "name": "Nvme0" 00:07:46.934 }, 00:07:46.934 "method": "bdev_nvme_attach_controller" 00:07:46.934 }, 00:07:46.934 { 00:07:46.934 "method": "bdev_wait_for_examine" 00:07:46.934 } 00:07:46.934 ] 00:07:46.934 } 00:07:46.934 ] 00:07:46.934 } 00:07:47.193 [2024-12-16 14:23:39.239674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.193 [2024-12-16 14:23:39.257707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.193 [2024-12-16 14:23:39.287476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.193  [2024-12-16T14:23:39.653Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:47.453 00:07:47.453 00:07:47.453 real 0m10.710s 00:07:47.453 user 0m7.821s 00:07:47.453 sys 0m3.421s 00:07:47.453 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.453 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.453 ************************************ 00:07:47.453 END TEST dd_rw 00:07:47.453 ************************************ 00:07:47.453 14:23:39 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:47.453 14:23:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.453 14:23:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.453 14:23:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.453 ************************************ 00:07:47.453 START TEST dd_rw_offset 00:07:47.453 ************************************ 00:07:47.453 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:07:47.453 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:47.453 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:47.453 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:47.453 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:47.453 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:47.454 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=yxfg6mgakkhudtp95j3n710kojxu5q9m33x01gk2r8xtwpp942rrtxfoy2ooqx7lrt89e3qiumfnh18yrpn1pqyu9mzp2yla7auxkb28iw4xeb2zrtrm8p3j1pu7sqojcgvxome6gqrf9m55cxafz31yfkoja9xvp0hacs0yaxzwafntwghiwhszgv4fwf6qyxtsf4isjzuklq9vnzbh9xn301k7ftgee3n68arjs8ddjmvf2nv3vxg7z5ukcvkujusepttav4btll2gugc3n9d6acyt1uu6fub61xtsuddmqrvf60ha8qt95ifpy9pk4gkw5uiq9l67zpcx46a0a36mqv8uh4m012aibjb9resytvz0i0ss2kbnyd767kinxfki5lkv60ntl0p3a0yfr8pxihzfn7bv6qe2un04tir1534iavgvrz42pd9sv51nvtllyxxlm6pd10cllu2a5gedbbxx84mqzfackf9v9anqk62v3s12aociexcbsk06rsq2sqbkh4v7px9p8h7kb0m1843nyhpvwwbkejof7v4dahdvhmxdubcw46fcd6om0zmqwqbhatlbmr4f8ecvxy6ndkpi2vd8m7i5bv2vaoejlad2trjyr8qt6zc8oq4resacmpqd6vky81qb6ssmccq0wtoup1e9zpi0kgq14m9znvhws8jpruptsda4ulv1e9sr3cwxyymx79t4buz05j01azenli4673k4hu3tf227e3sezzjnas0drwss29px4k4bxg332ys6w7i18161jxhu2wrguuhj93l1eh3gj8cheoao4jf83faxphyo8un84v51ydofn3za62h96874vjiiekhzy1iccy03f3e4yq4mrj1lf50b37ld3558uvbwa5f1wlxii5qfkuyg9r93bi08wcbxn1982uiaxldzcthe5w15qgy7zu15ohz334q027ok1v43l3govymdhgy5ednfs6fwa0xt1ti2eodkdyp4lpfoivip6imw4g4imtfxw0k58ofrnqtznu3ighlhebox8ck9xaewiorcrd6bffoakd7az7xts9zv1h3qtuj3o7fshxmf0lid76l6s48xh2uyb6bgn09l1knyba6nwqlnlfwei30iufie8svj28409lzp14vnodmmzudqqnq4hsi7ndbo7cb6gquspppogkbs275wiw9ybt96cfx3merzla66cacls4dki46x2unzmx9dzlw4a1p9l8un4plg2qv9ivleg4jz688act8ni58no88b7buioxg9i4ld2rmh6cnpkzty4au72putm1p9raubxh42mt277ndnckslfvtuwz8vweq1742ai9ecdhoregs1k3apwtfilv7uut7onfogxxj19m1bny8jmms9vu1zbpqelx2a1j3654cciwe888tqfrd9yxqzt6jc0fm782je92pclrvg4h1m9121o9wtrrx047sem8k19ixfyjmngiad0xj9wdvlo0w123sqgg3br85w35yydp9xq864jyywa7yhx501rgg63vm60lhejy4lrlrxu1ol4i5q50j98ee38ztwoj8bvvm9nmlqe7dn5ajogik510ffruk9dc8p9d6fpxoqyhe93zwb6cvb4miw0rvzywwf0aiobs5c9fxsuaqdeigj6kf7ll9fv8kn5ko4dcp9bz21t3g86ibpoq2pg5ck47mvae9t5cemq4ooq5vdgea2mj677othwmdpj5fbds9jletgd6m3x43qgkaz3jn0irpme7xlj6w3hmyoougus36ypv70figx5f50ygatag84xj7bmjefw0g22hhs6h0on1jbhdof9049eu4im3guo1nfzs3alei00r8oq7arqchoqmq890yzi7i1vkc72rj3x4f83pxoz07d7c0ouj37pucz1cts0kybd1439om2en2jwcltmykd4hdtf722a423nwusmyw1u3mlgm31b17ny9l48qpk0t6swq8es81wt3ikx4cey101orsv2g78z7a8gfwemdl5wof6x6airhopqpd9yxfbccki55rmem8zja8aq01sv13wn7mxp5vyzl4d9xs2xghgla9faxkrww8vnognbtcx0js1q1isgl0v564fvyz7msgniif4kvccnx6ro33slkxe8ihdtcge4506dcbtkjyboxea5032etozhdcz8nhiomzjls34g42fjy2nlh2l33pbuci2rsms93bboc3457iasbfc9ticac4pzs70lgp793mq8okj6ghwlbr5wztu28eyi0sbc0z3epcpla97dazixcyvuwjunweg8d7ssk3asusl7qwr6282ztjxxatsv5xpmk9z3az2ei61880pre25bp22x5qcudleqga141af5h1dhtaqmmkjhy8gmaztkdrcfsczwe99i633epccttaocqz889wjdht14x34cl4opv26m98i0c15snlr01zjt8hqa43zp6wwmt5ooltcf7rc00bn37t4np3bwhaaqjr2ozpd6f372bdgatsudzk3ed7cjat1rqv6yz0m16ptyaemg6o0n7rqrwsnw1ehirtsgqy7t70nak0lhu9uaaqjfus8lh3nndjwa0v88axqe5vnh6ak0g10r6canp1frtqavp5ohsrfe31prdijtc6wntid2s04bcs4pns90ks2b9ze8pbcb79vl79zzxltgihma74ugula6kmw0usan48djw5la50fk0vatx8qzhggozy49qq95y5mdwum3cluzbfhlbjq97ctqzrcnmoqbabhqobooc12r2618mxddzad4z3i4a8v4tuvaqr0qnn82eporx1sype52339r420rj7f4hd3jw4z0dg934dun50r91bg3j9x8q831ht8i0sa796lqncr1bdhykrrai0dh6qz056mu80oign2tr8u3rtvz1cue6tnwjzihrvvfkxk8trqttrmlekxvjfs6nbak12r8wrkwwydc3c2ush32tlqtfosqls38rrv59zdak72jvvaye4z0zaxklgutg76qoxa6qhxg95ysoofxjswl7rljc6jpihrj7pvx9hulk25g8jewwp2xvceiam45ybmnn6jpclwwpmnc78t51xpr6uu70go0pug3k508qpd8nixluqakn6dcaq4gudx6bh3ilc6u3mlvbdn5o0arp1bkg2jcx55zhw56ni8qakxreml3frvzdcdngseqfsznxjoh6a58lt2g3oqsp7q94h8hbr9x2gsus8nyrqcbuwn0lblnnrub5d04rdu6zo2tq8q7casjjut6a2vwvp0ovwfu8zz3fo0yxnyitai10eut5kwtaz4o6fiu3xphcuoxw8uoo3mqnjb6n18nnup1a1xv434b9hgkw1utxdnyxlt96moa3brplbjabvzjdse7zvwmne87md4vozcpxih3hzkre31kqwxko8izc7nh4n6apwea7o6zsflngq1lspzd7mak9vralmf2x6ppnvzycuuuf1ptapsyxs97om3vjupxa1uew
3pq73s5admhft7b8xd0cea26msy39jz94bp62advkf8nnsy5aw72hx1kowd1650snhiqbbmxic57a4ccnjm45tewlxnv924g6pc21ilf923wq2b1v5ywqot5pev8kyxrtbm8sycpzozmbinomcx3vksoxle2p133q4dkpbuf1qmgk2yji6x0qfxwk0zbtzwwomlbt4aqnlrtrs73rah5gjmpi4v3of269unv9nvz5pfm46t14wttrg6a1kxh1cncei696bkzdssn6wm8huktumod6uxykpzj7wk8c8thmkfkcl9rkaomxhcgvqqfsqdwmf9xlteya609pgltn13scnoj3nabd7dw17ah5golejblhoqwgzyjxshim3yrvyud04nwyb2v4bmwu7k5j5af0kvwj98g1y6hzmvnei3zmi5g2zd5d6qs1lzucfu1gx4ue25gnwm0zj1w2bp3ltrs2lkx77sb5zhl1cb2k4p29s5mheupqeoln434nc6pgcwnk053izibbd91ithh5t3iln67ujteor2w2t 00:07:47.454 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:47.454 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:47.454 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:47.454 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:47.454 [2024-12-16 14:23:39.618076] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:47.454 [2024-12-16 14:23:39.618161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73863 ] 00:07:47.454 { 00:07:47.454 "subsystems": [ 00:07:47.454 { 00:07:47.454 "subsystem": "bdev", 00:07:47.454 "config": [ 00:07:47.454 { 00:07:47.454 "params": { 00:07:47.454 "trtype": "pcie", 00:07:47.454 "traddr": "0000:00:10.0", 00:07:47.454 "name": "Nvme0" 00:07:47.454 }, 00:07:47.454 "method": "bdev_nvme_attach_controller" 00:07:47.454 }, 00:07:47.454 { 00:07:47.454 "method": "bdev_wait_for_examine" 00:07:47.454 } 00:07:47.454 ] 00:07:47.454 } 00:07:47.454 ] 00:07:47.454 } 00:07:47.713 [2024-12-16 14:23:39.754846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.713 [2024-12-16 14:23:39.772825] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.713 [2024-12-16 14:23:39.799614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.713  [2024-12-16T14:23:40.172Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:47.972 00:07:47.972 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:47.972 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:47.972 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:47.972 14:23:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:47.972 [2024-12-16 14:23:40.051495] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:47.973 [2024-12-16 14:23:40.051592] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73877 ] 00:07:47.973 { 00:07:47.973 "subsystems": [ 00:07:47.973 { 00:07:47.973 "subsystem": "bdev", 00:07:47.973 "config": [ 00:07:47.973 { 00:07:47.973 "params": { 00:07:47.973 "trtype": "pcie", 00:07:47.973 "traddr": "0000:00:10.0", 00:07:47.973 "name": "Nvme0" 00:07:47.973 }, 00:07:47.973 "method": "bdev_nvme_attach_controller" 00:07:47.973 }, 00:07:47.973 { 00:07:47.973 "method": "bdev_wait_for_examine" 00:07:47.973 } 00:07:47.973 ] 00:07:47.973 } 00:07:47.973 ] 00:07:47.973 } 00:07:48.232 [2024-12-16 14:23:40.198607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.232 [2024-12-16 14:23:40.216293] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.232 [2024-12-16 14:23:40.243004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.232  [2024-12-16T14:23:40.693Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:48.493 00:07:48.493 14:23:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:48.493 ************************************ 00:07:48.493 END TEST dd_rw_offset 00:07:48.493 ************************************ 00:07:48.494 14:23:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ yxfg6mgakkhudtp95j3n710kojxu5q9m33x01gk2r8xtwpp942rrtxfoy2ooqx7lrt89e3qiumfnh18yrpn1pqyu9mzp2yla7auxkb28iw4xeb2zrtrm8p3j1pu7sqojcgvxome6gqrf9m55cxafz31yfkoja9xvp0hacs0yaxzwafntwghiwhszgv4fwf6qyxtsf4isjzuklq9vnzbh9xn301k7ftgee3n68arjs8ddjmvf2nv3vxg7z5ukcvkujusepttav4btll2gugc3n9d6acyt1uu6fub61xtsuddmqrvf60ha8qt95ifpy9pk4gkw5uiq9l67zpcx46a0a36mqv8uh4m012aibjb9resytvz0i0ss2kbnyd767kinxfki5lkv60ntl0p3a0yfr8pxihzfn7bv6qe2un04tir1534iavgvrz42pd9sv51nvtllyxxlm6pd10cllu2a5gedbbxx84mqzfackf9v9anqk62v3s12aociexcbsk06rsq2sqbkh4v7px9p8h7kb0m1843nyhpvwwbkejof7v4dahdvhmxdubcw46fcd6om0zmqwqbhatlbmr4f8ecvxy6ndkpi2vd8m7i5bv2vaoejlad2trjyr8qt6zc8oq4resacmpqd6vky81qb6ssmccq0wtoup1e9zpi0kgq14m9znvhws8jpruptsda4ulv1e9sr3cwxyymx79t4buz05j01azenli4673k4hu3tf227e3sezzjnas0drwss29px4k4bxg332ys6w7i18161jxhu2wrguuhj93l1eh3gj8cheoao4jf83faxphyo8un84v51ydofn3za62h96874vjiiekhzy1iccy03f3e4yq4mrj1lf50b37ld3558uvbwa5f1wlxii5qfkuyg9r93bi08wcbxn1982uiaxldzcthe5w15qgy7zu15ohz334q027ok1v43l3govymdhgy5ednfs6fwa0xt1ti2eodkdyp4lpfoivip6imw4g4imtfxw0k58ofrnqtznu3ighlhebox8ck9xaewiorcrd6bffoakd7az7xts9zv1h3qtuj3o7fshxmf0lid76l6s48xh2uyb6bgn09l1knyba6nwqlnlfwei30iufie8svj28409lzp14vnodmmzudqqnq4hsi7ndbo7cb6gquspppogkbs275wiw9ybt96cfx3merzla66cacls4dki46x2unzmx9dzlw4a1p9l8un4plg2qv9ivleg4jz688act8ni58no88b7buioxg9i4ld2rmh6cnpkzty4au72putm1p9raubxh42mt277ndnckslfvtuwz8vweq1742ai9ecdhoregs1k3apwtfilv7uut7onfogxxj19m1bny8jmms9vu1zbpqelx2a1j3654cciwe888tqfrd9yxqzt6jc0fm782je92pclrvg4h1m9121o9wtrrx047sem8k19ixfyjmngiad0xj9wdvlo0w123sqgg3br85w35yydp9xq864jyywa7yhx501rgg63vm60lhejy4lrlrxu1ol4i5q50j98ee38ztwoj8bvvm9nmlqe7dn5ajogik510ffruk9dc8p9d6fpxoqyhe93zwb6cvb4miw0rvzywwf0aiobs5c9fxsuaqdeigj6kf7ll9fv8kn5ko4dcp9bz21t3g86ibpoq2pg5ck47mvae9t5cemq4ooq5vdgea2mj677othwmdpj5fbds9jletgd6m3x43qgkaz3jn0irpme7xlj6w3hmyoougus36ypv70figx5f50ygatag84xj7bmjefw0g22hhs6h0on1jbhdof9049eu4im3guo1nfzs3alei00r8oq7arqchoqmq890yzi7i1vkc72rj3x4f83pxoz07d7c0ouj37pucz1cts0kybd1439om2en2jwcltmykd4hdtf722a423nwusmyw1u3mlgm31b17ny9l48q
pk0t6swq8es81wt3ikx4cey101orsv2g78z7a8gfwemdl5wof6x6airhopqpd9yxfbccki55rmem8zja8aq01sv13wn7mxp5vyzl4d9xs2xghgla9faxkrww8vnognbtcx0js1q1isgl0v564fvyz7msgniif4kvccnx6ro33slkxe8ihdtcge4506dcbtkjyboxea5032etozhdcz8nhiomzjls34g42fjy2nlh2l33pbuci2rsms93bboc3457iasbfc9ticac4pzs70lgp793mq8okj6ghwlbr5wztu28eyi0sbc0z3epcpla97dazixcyvuwjunweg8d7ssk3asusl7qwr6282ztjxxatsv5xpmk9z3az2ei61880pre25bp22x5qcudleqga141af5h1dhtaqmmkjhy8gmaztkdrcfsczwe99i633epccttaocqz889wjdht14x34cl4opv26m98i0c15snlr01zjt8hqa43zp6wwmt5ooltcf7rc00bn37t4np3bwhaaqjr2ozpd6f372bdgatsudzk3ed7cjat1rqv6yz0m16ptyaemg6o0n7rqrwsnw1ehirtsgqy7t70nak0lhu9uaaqjfus8lh3nndjwa0v88axqe5vnh6ak0g10r6canp1frtqavp5ohsrfe31prdijtc6wntid2s04bcs4pns90ks2b9ze8pbcb79vl79zzxltgihma74ugula6kmw0usan48djw5la50fk0vatx8qzhggozy49qq95y5mdwum3cluzbfhlbjq97ctqzrcnmoqbabhqobooc12r2618mxddzad4z3i4a8v4tuvaqr0qnn82eporx1sype52339r420rj7f4hd3jw4z0dg934dun50r91bg3j9x8q831ht8i0sa796lqncr1bdhykrrai0dh6qz056mu80oign2tr8u3rtvz1cue6tnwjzihrvvfkxk8trqttrmlekxvjfs6nbak12r8wrkwwydc3c2ush32tlqtfosqls38rrv59zdak72jvvaye4z0zaxklgutg76qoxa6qhxg95ysoofxjswl7rljc6jpihrj7pvx9hulk25g8jewwp2xvceiam45ybmnn6jpclwwpmnc78t51xpr6uu70go0pug3k508qpd8nixluqakn6dcaq4gudx6bh3ilc6u3mlvbdn5o0arp1bkg2jcx55zhw56ni8qakxreml3frvzdcdngseqfsznxjoh6a58lt2g3oqsp7q94h8hbr9x2gsus8nyrqcbuwn0lblnnrub5d04rdu6zo2tq8q7casjjut6a2vwvp0ovwfu8zz3fo0yxnyitai10eut5kwtaz4o6fiu3xphcuoxw8uoo3mqnjb6n18nnup1a1xv434b9hgkw1utxdnyxlt96moa3brplbjabvzjdse7zvwmne87md4vozcpxih3hzkre31kqwxko8izc7nh4n6apwea7o6zsflngq1lspzd7mak9vralmf2x6ppnvzycuuuf1ptapsyxs97om3vjupxa1uew3pq73s5admhft7b8xd0cea26msy39jz94bp62advkf8nnsy5aw72hx1kowd1650snhiqbbmxic57a4ccnjm45tewlxnv924g6pc21ilf923wq2b1v5ywqot5pev8kyxrtbm8sycpzozmbinomcx3vksoxle2p133q4dkpbuf1qmgk2yji6x0qfxwk0zbtzwwomlbt4aqnlrtrs73rah5gjmpi4v3of269unv9nvz5pfm46t14wttrg6a1kxh1cncei696bkzdssn6wm8huktumod6uxykpzj7wk8c8thmkfkcl9rkaomxhcgvqqfsqdwmf9xlteya609pgltn13scnoj3nabd7dw17ah5golejblhoqwgzyjxshim3yrvyud04nwyb2v4bmwu7k5j5af0kvwj98g1y6hzmvnei3zmi5g2zd5d6qs1lzucfu1gx4ue25gnwm0zj1w2bp3ltrs2lkx77sb5zhl1cb2k4p29s5mheupqeoln434nc6pgcwnk053izibbd91ithh5t3iln67ujteor2w2t == 
\y\x\f\g\6\m\g\a\k\k\h\u\d\t\p\9\5\j\3\n\7\1\0\k\o\j\x\u\5\q\9\m\3\3\x\0\1\g\k\2\r\8\x\t\w\p\p\9\4\2\r\r\t\x\f\o\y\2\o\o\q\x\7\l\r\t\8\9\e\3\q\i\u\m\f\n\h\1\8\y\r\p\n\1\p\q\y\u\9\m\z\p\2\y\l\a\7\a\u\x\k\b\2\8\i\w\4\x\e\b\2\z\r\t\r\m\8\p\3\j\1\p\u\7\s\q\o\j\c\g\v\x\o\m\e\6\g\q\r\f\9\m\5\5\c\x\a\f\z\3\1\y\f\k\o\j\a\9\x\v\p\0\h\a\c\s\0\y\a\x\z\w\a\f\n\t\w\g\h\i\w\h\s\z\g\v\4\f\w\f\6\q\y\x\t\s\f\4\i\s\j\z\u\k\l\q\9\v\n\z\b\h\9\x\n\3\0\1\k\7\f\t\g\e\e\3\n\6\8\a\r\j\s\8\d\d\j\m\v\f\2\n\v\3\v\x\g\7\z\5\u\k\c\v\k\u\j\u\s\e\p\t\t\a\v\4\b\t\l\l\2\g\u\g\c\3\n\9\d\6\a\c\y\t\1\u\u\6\f\u\b\6\1\x\t\s\u\d\d\m\q\r\v\f\6\0\h\a\8\q\t\9\5\i\f\p\y\9\p\k\4\g\k\w\5\u\i\q\9\l\6\7\z\p\c\x\4\6\a\0\a\3\6\m\q\v\8\u\h\4\m\0\1\2\a\i\b\j\b\9\r\e\s\y\t\v\z\0\i\0\s\s\2\k\b\n\y\d\7\6\7\k\i\n\x\f\k\i\5\l\k\v\6\0\n\t\l\0\p\3\a\0\y\f\r\8\p\x\i\h\z\f\n\7\b\v\6\q\e\2\u\n\0\4\t\i\r\1\5\3\4\i\a\v\g\v\r\z\4\2\p\d\9\s\v\5\1\n\v\t\l\l\y\x\x\l\m\6\p\d\1\0\c\l\l\u\2\a\5\g\e\d\b\b\x\x\8\4\m\q\z\f\a\c\k\f\9\v\9\a\n\q\k\6\2\v\3\s\1\2\a\o\c\i\e\x\c\b\s\k\0\6\r\s\q\2\s\q\b\k\h\4\v\7\p\x\9\p\8\h\7\k\b\0\m\1\8\4\3\n\y\h\p\v\w\w\b\k\e\j\o\f\7\v\4\d\a\h\d\v\h\m\x\d\u\b\c\w\4\6\f\c\d\6\o\m\0\z\m\q\w\q\b\h\a\t\l\b\m\r\4\f\8\e\c\v\x\y\6\n\d\k\p\i\2\v\d\8\m\7\i\5\b\v\2\v\a\o\e\j\l\a\d\2\t\r\j\y\r\8\q\t\6\z\c\8\o\q\4\r\e\s\a\c\m\p\q\d\6\v\k\y\8\1\q\b\6\s\s\m\c\c\q\0\w\t\o\u\p\1\e\9\z\p\i\0\k\g\q\1\4\m\9\z\n\v\h\w\s\8\j\p\r\u\p\t\s\d\a\4\u\l\v\1\e\9\s\r\3\c\w\x\y\y\m\x\7\9\t\4\b\u\z\0\5\j\0\1\a\z\e\n\l\i\4\6\7\3\k\4\h\u\3\t\f\2\2\7\e\3\s\e\z\z\j\n\a\s\0\d\r\w\s\s\2\9\p\x\4\k\4\b\x\g\3\3\2\y\s\6\w\7\i\1\8\1\6\1\j\x\h\u\2\w\r\g\u\u\h\j\9\3\l\1\e\h\3\g\j\8\c\h\e\o\a\o\4\j\f\8\3\f\a\x\p\h\y\o\8\u\n\8\4\v\5\1\y\d\o\f\n\3\z\a\6\2\h\9\6\8\7\4\v\j\i\i\e\k\h\z\y\1\i\c\c\y\0\3\f\3\e\4\y\q\4\m\r\j\1\l\f\5\0\b\3\7\l\d\3\5\5\8\u\v\b\w\a\5\f\1\w\l\x\i\i\5\q\f\k\u\y\g\9\r\9\3\b\i\0\8\w\c\b\x\n\1\9\8\2\u\i\a\x\l\d\z\c\t\h\e\5\w\1\5\q\g\y\7\z\u\1\5\o\h\z\3\3\4\q\0\2\7\o\k\1\v\4\3\l\3\g\o\v\y\m\d\h\g\y\5\e\d\n\f\s\6\f\w\a\0\x\t\1\t\i\2\e\o\d\k\d\y\p\4\l\p\f\o\i\v\i\p\6\i\m\w\4\g\4\i\m\t\f\x\w\0\k\5\8\o\f\r\n\q\t\z\n\u\3\i\g\h\l\h\e\b\o\x\8\c\k\9\x\a\e\w\i\o\r\c\r\d\6\b\f\f\o\a\k\d\7\a\z\7\x\t\s\9\z\v\1\h\3\q\t\u\j\3\o\7\f\s\h\x\m\f\0\l\i\d\7\6\l\6\s\4\8\x\h\2\u\y\b\6\b\g\n\0\9\l\1\k\n\y\b\a\6\n\w\q\l\n\l\f\w\e\i\3\0\i\u\f\i\e\8\s\v\j\2\8\4\0\9\l\z\p\1\4\v\n\o\d\m\m\z\u\d\q\q\n\q\4\h\s\i\7\n\d\b\o\7\c\b\6\g\q\u\s\p\p\p\o\g\k\b\s\2\7\5\w\i\w\9\y\b\t\9\6\c\f\x\3\m\e\r\z\l\a\6\6\c\a\c\l\s\4\d\k\i\4\6\x\2\u\n\z\m\x\9\d\z\l\w\4\a\1\p\9\l\8\u\n\4\p\l\g\2\q\v\9\i\v\l\e\g\4\j\z\6\8\8\a\c\t\8\n\i\5\8\n\o\8\8\b\7\b\u\i\o\x\g\9\i\4\l\d\2\r\m\h\6\c\n\p\k\z\t\y\4\a\u\7\2\p\u\t\m\1\p\9\r\a\u\b\x\h\4\2\m\t\2\7\7\n\d\n\c\k\s\l\f\v\t\u\w\z\8\v\w\e\q\1\7\4\2\a\i\9\e\c\d\h\o\r\e\g\s\1\k\3\a\p\w\t\f\i\l\v\7\u\u\t\7\o\n\f\o\g\x\x\j\1\9\m\1\b\n\y\8\j\m\m\s\9\v\u\1\z\b\p\q\e\l\x\2\a\1\j\3\6\5\4\c\c\i\w\e\8\8\8\t\q\f\r\d\9\y\x\q\z\t\6\j\c\0\f\m\7\8\2\j\e\9\2\p\c\l\r\v\g\4\h\1\m\9\1\2\1\o\9\w\t\r\r\x\0\4\7\s\e\m\8\k\1\9\i\x\f\y\j\m\n\g\i\a\d\0\x\j\9\w\d\v\l\o\0\w\1\2\3\s\q\g\g\3\b\r\8\5\w\3\5\y\y\d\p\9\x\q\8\6\4\j\y\y\w\a\7\y\h\x\5\0\1\r\g\g\6\3\v\m\6\0\l\h\e\j\y\4\l\r\l\r\x\u\1\o\l\4\i\5\q\5\0\j\9\8\e\e\3\8\z\t\w\o\j\8\b\v\v\m\9\n\m\l\q\e\7\d\n\5\a\j\o\g\i\k\5\1\0\f\f\r\u\k\9\d\c\8\p\9\d\6\f\p\x\o\q\y\h\e\9\3\z\w\b\6\c\v\b\4\m\i\w\0\r\v\z\y\w\w\f\0\a\i\o\b\s\5\c\9\f\x\s\u\a\q\d\e\i\g\j\6\k\f\7\l\l\9\f\v\8\k\n\5\k\o\4\d\c\p\9\b\z\2\1\t\3\g\8\6\i\b\p\o\q\2\p\g\5\c\k\4\7\m\v\a\e\9\t\5\c\e\m\q\4\o\o\q\5\v\d\g\e\a\2\m\j\6\7\7\o\t\h\w\m\d\p\j\5\f\b\d\s\9\j\l\e\t\g\d\6\m\3\x\4\3\q\g\k\a\z\3\j\n\0\i\
r\p\m\e\7\x\l\j\6\w\3\h\m\y\o\o\u\g\u\s\3\6\y\p\v\7\0\f\i\g\x\5\f\5\0\y\g\a\t\a\g\8\4\x\j\7\b\m\j\e\f\w\0\g\2\2\h\h\s\6\h\0\o\n\1\j\b\h\d\o\f\9\0\4\9\e\u\4\i\m\3\g\u\o\1\n\f\z\s\3\a\l\e\i\0\0\r\8\o\q\7\a\r\q\c\h\o\q\m\q\8\9\0\y\z\i\7\i\1\v\k\c\7\2\r\j\3\x\4\f\8\3\p\x\o\z\0\7\d\7\c\0\o\u\j\3\7\p\u\c\z\1\c\t\s\0\k\y\b\d\1\4\3\9\o\m\2\e\n\2\j\w\c\l\t\m\y\k\d\4\h\d\t\f\7\2\2\a\4\2\3\n\w\u\s\m\y\w\1\u\3\m\l\g\m\3\1\b\1\7\n\y\9\l\4\8\q\p\k\0\t\6\s\w\q\8\e\s\8\1\w\t\3\i\k\x\4\c\e\y\1\0\1\o\r\s\v\2\g\7\8\z\7\a\8\g\f\w\e\m\d\l\5\w\o\f\6\x\6\a\i\r\h\o\p\q\p\d\9\y\x\f\b\c\c\k\i\5\5\r\m\e\m\8\z\j\a\8\a\q\0\1\s\v\1\3\w\n\7\m\x\p\5\v\y\z\l\4\d\9\x\s\2\x\g\h\g\l\a\9\f\a\x\k\r\w\w\8\v\n\o\g\n\b\t\c\x\0\j\s\1\q\1\i\s\g\l\0\v\5\6\4\f\v\y\z\7\m\s\g\n\i\i\f\4\k\v\c\c\n\x\6\r\o\3\3\s\l\k\x\e\8\i\h\d\t\c\g\e\4\5\0\6\d\c\b\t\k\j\y\b\o\x\e\a\5\0\3\2\e\t\o\z\h\d\c\z\8\n\h\i\o\m\z\j\l\s\3\4\g\4\2\f\j\y\2\n\l\h\2\l\3\3\p\b\u\c\i\2\r\s\m\s\9\3\b\b\o\c\3\4\5\7\i\a\s\b\f\c\9\t\i\c\a\c\4\p\z\s\7\0\l\g\p\7\9\3\m\q\8\o\k\j\6\g\h\w\l\b\r\5\w\z\t\u\2\8\e\y\i\0\s\b\c\0\z\3\e\p\c\p\l\a\9\7\d\a\z\i\x\c\y\v\u\w\j\u\n\w\e\g\8\d\7\s\s\k\3\a\s\u\s\l\7\q\w\r\6\2\8\2\z\t\j\x\x\a\t\s\v\5\x\p\m\k\9\z\3\a\z\2\e\i\6\1\8\8\0\p\r\e\2\5\b\p\2\2\x\5\q\c\u\d\l\e\q\g\a\1\4\1\a\f\5\h\1\d\h\t\a\q\m\m\k\j\h\y\8\g\m\a\z\t\k\d\r\c\f\s\c\z\w\e\9\9\i\6\3\3\e\p\c\c\t\t\a\o\c\q\z\8\8\9\w\j\d\h\t\1\4\x\3\4\c\l\4\o\p\v\2\6\m\9\8\i\0\c\1\5\s\n\l\r\0\1\z\j\t\8\h\q\a\4\3\z\p\6\w\w\m\t\5\o\o\l\t\c\f\7\r\c\0\0\b\n\3\7\t\4\n\p\3\b\w\h\a\a\q\j\r\2\o\z\p\d\6\f\3\7\2\b\d\g\a\t\s\u\d\z\k\3\e\d\7\c\j\a\t\1\r\q\v\6\y\z\0\m\1\6\p\t\y\a\e\m\g\6\o\0\n\7\r\q\r\w\s\n\w\1\e\h\i\r\t\s\g\q\y\7\t\7\0\n\a\k\0\l\h\u\9\u\a\a\q\j\f\u\s\8\l\h\3\n\n\d\j\w\a\0\v\8\8\a\x\q\e\5\v\n\h\6\a\k\0\g\1\0\r\6\c\a\n\p\1\f\r\t\q\a\v\p\5\o\h\s\r\f\e\3\1\p\r\d\i\j\t\c\6\w\n\t\i\d\2\s\0\4\b\c\s\4\p\n\s\9\0\k\s\2\b\9\z\e\8\p\b\c\b\7\9\v\l\7\9\z\z\x\l\t\g\i\h\m\a\7\4\u\g\u\l\a\6\k\m\w\0\u\s\a\n\4\8\d\j\w\5\l\a\5\0\f\k\0\v\a\t\x\8\q\z\h\g\g\o\z\y\4\9\q\q\9\5\y\5\m\d\w\u\m\3\c\l\u\z\b\f\h\l\b\j\q\9\7\c\t\q\z\r\c\n\m\o\q\b\a\b\h\q\o\b\o\o\c\1\2\r\2\6\1\8\m\x\d\d\z\a\d\4\z\3\i\4\a\8\v\4\t\u\v\a\q\r\0\q\n\n\8\2\e\p\o\r\x\1\s\y\p\e\5\2\3\3\9\r\4\2\0\r\j\7\f\4\h\d\3\j\w\4\z\0\d\g\9\3\4\d\u\n\5\0\r\9\1\b\g\3\j\9\x\8\q\8\3\1\h\t\8\i\0\s\a\7\9\6\l\q\n\c\r\1\b\d\h\y\k\r\r\a\i\0\d\h\6\q\z\0\5\6\m\u\8\0\o\i\g\n\2\t\r\8\u\3\r\t\v\z\1\c\u\e\6\t\n\w\j\z\i\h\r\v\v\f\k\x\k\8\t\r\q\t\t\r\m\l\e\k\x\v\j\f\s\6\n\b\a\k\1\2\r\8\w\r\k\w\w\y\d\c\3\c\2\u\s\h\3\2\t\l\q\t\f\o\s\q\l\s\3\8\r\r\v\5\9\z\d\a\k\7\2\j\v\v\a\y\e\4\z\0\z\a\x\k\l\g\u\t\g\7\6\q\o\x\a\6\q\h\x\g\9\5\y\s\o\o\f\x\j\s\w\l\7\r\l\j\c\6\j\p\i\h\r\j\7\p\v\x\9\h\u\l\k\2\5\g\8\j\e\w\w\p\2\x\v\c\e\i\a\m\4\5\y\b\m\n\n\6\j\p\c\l\w\w\p\m\n\c\7\8\t\5\1\x\p\r\6\u\u\7\0\g\o\0\p\u\g\3\k\5\0\8\q\p\d\8\n\i\x\l\u\q\a\k\n\6\d\c\a\q\4\g\u\d\x\6\b\h\3\i\l\c\6\u\3\m\l\v\b\d\n\5\o\0\a\r\p\1\b\k\g\2\j\c\x\5\5\z\h\w\5\6\n\i\8\q\a\k\x\r\e\m\l\3\f\r\v\z\d\c\d\n\g\s\e\q\f\s\z\n\x\j\o\h\6\a\5\8\l\t\2\g\3\o\q\s\p\7\q\9\4\h\8\h\b\r\9\x\2\g\s\u\s\8\n\y\r\q\c\b\u\w\n\0\l\b\l\n\n\r\u\b\5\d\0\4\r\d\u\6\z\o\2\t\q\8\q\7\c\a\s\j\j\u\t\6\a\2\v\w\v\p\0\o\v\w\f\u\8\z\z\3\f\o\0\y\x\n\y\i\t\a\i\1\0\e\u\t\5\k\w\t\a\z\4\o\6\f\i\u\3\x\p\h\c\u\o\x\w\8\u\o\o\3\m\q\n\j\b\6\n\1\8\n\n\u\p\1\a\1\x\v\4\3\4\b\9\h\g\k\w\1\u\t\x\d\n\y\x\l\t\9\6\m\o\a\3\b\r\p\l\b\j\a\b\v\z\j\d\s\e\7\z\v\w\m\n\e\8\7\m\d\4\v\o\z\c\p\x\i\h\3\h\z\k\r\e\3\1\k\q\w\x\k\o\8\i\z\c\7\n\h\4\n\6\a\p\w\e\a\7\o\6\z\s\f\l\n\g\q\1\l\s\p\z\d\7\m\a\k\9\v\r\a\l\m\f\2\x\6\p\p\n\v\z\y\c\u\u\u\f\1\p\t\a\p\s\y\x\s\9\7\o\m\3\v\j\u\p\x\a\1\u\e\w\3\p\q\7\3
\s\5\a\d\m\h\f\t\7\b\8\x\d\0\c\e\a\2\6\m\s\y\3\9\j\z\9\4\b\p\6\2\a\d\v\k\f\8\n\n\s\y\5\a\w\7\2\h\x\1\k\o\w\d\1\6\5\0\s\n\h\i\q\b\b\m\x\i\c\5\7\a\4\c\c\n\j\m\4\5\t\e\w\l\x\n\v\9\2\4\g\6\p\c\2\1\i\l\f\9\2\3\w\q\2\b\1\v\5\y\w\q\o\t\5\p\e\v\8\k\y\x\r\t\b\m\8\s\y\c\p\z\o\z\m\b\i\n\o\m\c\x\3\v\k\s\o\x\l\e\2\p\1\3\3\q\4\d\k\p\b\u\f\1\q\m\g\k\2\y\j\i\6\x\0\q\f\x\w\k\0\z\b\t\z\w\w\o\m\l\b\t\4\a\q\n\l\r\t\r\s\7\3\r\a\h\5\g\j\m\p\i\4\v\3\o\f\2\6\9\u\n\v\9\n\v\z\5\p\f\m\4\6\t\1\4\w\t\t\r\g\6\a\1\k\x\h\1\c\n\c\e\i\6\9\6\b\k\z\d\s\s\n\6\w\m\8\h\u\k\t\u\m\o\d\6\u\x\y\k\p\z\j\7\w\k\8\c\8\t\h\m\k\f\k\c\l\9\r\k\a\o\m\x\h\c\g\v\q\q\f\s\q\d\w\m\f\9\x\l\t\e\y\a\6\0\9\p\g\l\t\n\1\3\s\c\n\o\j\3\n\a\b\d\7\d\w\1\7\a\h\5\g\o\l\e\j\b\l\h\o\q\w\g\z\y\j\x\s\h\i\m\3\y\r\v\y\u\d\0\4\n\w\y\b\2\v\4\b\m\w\u\7\k\5\j\5\a\f\0\k\v\w\j\9\8\g\1\y\6\h\z\m\v\n\e\i\3\z\m\i\5\g\2\z\d\5\d\6\q\s\1\l\z\u\c\f\u\1\g\x\4\u\e\2\5\g\n\w\m\0\z\j\1\w\2\b\p\3\l\t\r\s\2\l\k\x\7\7\s\b\5\z\h\l\1\c\b\2\k\4\p\2\9\s\5\m\h\e\u\p\q\e\o\l\n\4\3\4\n\c\6\p\g\c\w\n\k\0\5\3\i\z\i\b\b\d\9\1\i\t\h\h\5\t\3\i\l\n\6\7\u\j\t\e\o\r\2\w\2\t ]] 00:07:48.494 00:07:48.494 real 0m0.901s 00:07:48.494 user 0m0.613s 00:07:48.494 sys 0m0.359s 00:07:48.494 14:23:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.494 14:23:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:48.494 14:23:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:48.494 14:23:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:48.494 14:23:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:48.494 14:23:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:48.494 14:23:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:48.494 14:23:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:48.494 14:23:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:48.494 14:23:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:48.494 14:23:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:48.494 14:23:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:48.494 14:23:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.494 [2024-12-16 14:23:40.533253] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:48.494 [2024-12-16 14:23:40.533934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73906 ] 00:07:48.494 { 00:07:48.494 "subsystems": [ 00:07:48.494 { 00:07:48.494 "subsystem": "bdev", 00:07:48.494 "config": [ 00:07:48.494 { 00:07:48.494 "params": { 00:07:48.494 "trtype": "pcie", 00:07:48.494 "traddr": "0000:00:10.0", 00:07:48.494 "name": "Nvme0" 00:07:48.494 }, 00:07:48.494 "method": "bdev_nvme_attach_controller" 00:07:48.494 }, 00:07:48.494 { 00:07:48.494 "method": "bdev_wait_for_examine" 00:07:48.494 } 00:07:48.494 ] 00:07:48.494 } 00:07:48.494 ] 00:07:48.494 } 00:07:48.494 [2024-12-16 14:23:40.682541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.753 [2024-12-16 14:23:40.702637] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.753 [2024-12-16 14:23:40.729632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.753  [2024-12-16T14:23:40.953Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:48.753 00:07:48.753 14:23:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.753 ************************************ 00:07:48.753 END TEST spdk_dd_basic_rw 00:07:48.753 ************************************ 00:07:48.753 00:07:48.753 real 0m13.101s 00:07:48.753 user 0m9.306s 00:07:48.753 sys 0m4.278s 00:07:48.753 14:23:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.753 14:23:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.013 14:23:40 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:49.013 14:23:40 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.013 14:23:40 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.013 14:23:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:49.013 ************************************ 00:07:49.013 START TEST spdk_dd_posix 00:07:49.013 ************************************ 00:07:49.013 14:23:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:49.013 * Looking for test storage... 
00:07:49.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:49.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.013 --rc genhtml_branch_coverage=1 00:07:49.013 --rc genhtml_function_coverage=1 00:07:49.013 --rc genhtml_legend=1 00:07:49.013 --rc geninfo_all_blocks=1 00:07:49.013 --rc geninfo_unexecuted_blocks=1 00:07:49.013 00:07:49.013 ' 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:49.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.013 --rc genhtml_branch_coverage=1 00:07:49.013 --rc genhtml_function_coverage=1 00:07:49.013 --rc genhtml_legend=1 00:07:49.013 --rc geninfo_all_blocks=1 00:07:49.013 --rc geninfo_unexecuted_blocks=1 00:07:49.013 00:07:49.013 ' 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:49.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.013 --rc genhtml_branch_coverage=1 00:07:49.013 --rc genhtml_function_coverage=1 00:07:49.013 --rc genhtml_legend=1 00:07:49.013 --rc geninfo_all_blocks=1 00:07:49.013 --rc geninfo_unexecuted_blocks=1 00:07:49.013 00:07:49.013 ' 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:49.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.013 --rc genhtml_branch_coverage=1 00:07:49.013 --rc genhtml_function_coverage=1 00:07:49.013 --rc genhtml_legend=1 00:07:49.013 --rc geninfo_all_blocks=1 00:07:49.013 --rc geninfo_unexecuted_blocks=1 00:07:49.013 00:07:49.013 ' 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:49.013 14:23:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:49.013 * First test run, liburing in use 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:49.014 ************************************ 00:07:49.014 START TEST dd_flag_append 00:07:49.014 ************************************ 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=nm3pgegf4kqmojuyi5ikroi8r5fzi55o 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=uvcce3q5cm2z8ldm53da9d6ttm4e900a 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s nm3pgegf4kqmojuyi5ikroi8r5fzi55o 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s uvcce3q5cm2z8ldm53da9d6ttm4e900a 00:07:49.014 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:49.273 [2024-12-16 14:23:41.257448] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:49.273 [2024-12-16 14:23:41.257720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73973 ] 00:07:49.273 [2024-12-16 14:23:41.403632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.273 [2024-12-16 14:23:41.423153] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.273 [2024-12-16 14:23:41.449684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.273  [2024-12-16T14:23:41.732Z] Copying: 32/32 [B] (average 31 kBps) 00:07:49.532 00:07:49.532 ************************************ 00:07:49.532 END TEST dd_flag_append 00:07:49.532 ************************************ 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ uvcce3q5cm2z8ldm53da9d6ttm4e900anm3pgegf4kqmojuyi5ikroi8r5fzi55o == \u\v\c\c\e\3\q\5\c\m\2\z\8\l\d\m\5\3\d\a\9\d\6\t\t\m\4\e\9\0\0\a\n\m\3\p\g\e\g\f\4\k\q\m\o\j\u\y\i\5\i\k\r\o\i\8\r\5\f\z\i\5\5\o ]] 00:07:49.532 00:07:49.532 real 0m0.371s 00:07:49.532 user 0m0.179s 00:07:49.532 sys 0m0.158s 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:49.532 ************************************ 00:07:49.532 START TEST dd_flag_directory 00:07:49.532 ************************************ 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:49.532 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:49.532 [2024-12-16 14:23:41.678687] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:49.532 [2024-12-16 14:23:41.678778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74001 ] 00:07:49.792 [2024-12-16 14:23:41.823484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.792 [2024-12-16 14:23:41.841122] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.792 [2024-12-16 14:23:41.867599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.792 [2024-12-16 14:23:41.882749] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:49.792 [2024-12-16 14:23:41.882799] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:49.792 [2024-12-16 14:23:41.882828] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.792 [2024-12-16 14:23:41.941835] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:50.051 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:50.051 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:50.051 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:50.051 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:50.051 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:50.051 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:50.051 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:50.051 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:50.051 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:50.051 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.051 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.051 14:23:41 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.051 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.051 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.051 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.051 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.051 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.051 14:23:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:50.051 [2024-12-16 14:23:42.047968] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:50.051 [2024-12-16 14:23:42.048211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74011 ] 00:07:50.051 [2024-12-16 14:23:42.192848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.051 [2024-12-16 14:23:42.210290] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.051 [2024-12-16 14:23:42.236543] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.311 [2024-12-16 14:23:42.252520] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:50.311 [2024-12-16 14:23:42.252593] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:50.311 [2024-12-16 14:23:42.252622] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.311 [2024-12-16 14:23:42.309853] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:50.311 ************************************ 00:07:50.311 END TEST dd_flag_directory 00:07:50.311 ************************************ 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:50.311 00:07:50.311 real 0m0.739s 00:07:50.311 user 0m0.354s 00:07:50.311 sys 0m0.177s 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:50.311 14:23:42 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:50.311 ************************************ 00:07:50.311 START TEST dd_flag_nofollow 00:07:50.311 ************************************ 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.311 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.311 [2024-12-16 14:23:42.475903] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:50.311 [2024-12-16 14:23:42.475999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74038 ] 00:07:50.571 [2024-12-16 14:23:42.619567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.571 [2024-12-16 14:23:42.637291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.571 [2024-12-16 14:23:42.663394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.571 [2024-12-16 14:23:42.678390] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:50.571 [2024-12-16 14:23:42.678461] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:50.571 [2024-12-16 14:23:42.678491] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.571 [2024-12-16 14:23:42.739180] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:50.831 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:50.831 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:50.831 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:50.831 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:50.831 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:50.831 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:50.831 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:50.831 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:50.831 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:50.831 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.831 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.831 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.831 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.831 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.831 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.831 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.831 14:23:42 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.831 14:23:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:50.831 [2024-12-16 14:23:42.837783] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:50.831 [2024-12-16 14:23:42.837877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74049 ] 00:07:50.831 [2024-12-16 14:23:42.980570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.831 [2024-12-16 14:23:42.997938] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.831 [2024-12-16 14:23:43.024003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.090 [2024-12-16 14:23:43.040151] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:51.090 [2024-12-16 14:23:43.040213] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:51.090 [2024-12-16 14:23:43.040241] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.090 [2024-12-16 14:23:43.096600] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:51.090 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:51.090 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:51.090 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:51.090 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:51.090 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:51.090 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:51.090 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:51.090 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:51.090 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:51.091 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.091 [2024-12-16 14:23:43.206080] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:51.091 [2024-12-16 14:23:43.206195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74051 ] 00:07:51.350 [2024-12-16 14:23:43.350534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.350 [2024-12-16 14:23:43.368979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.350 [2024-12-16 14:23:43.395218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.350  [2024-12-16T14:23:43.550Z] Copying: 512/512 [B] (average 500 kBps) 00:07:51.350 00:07:51.350 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ aelgemlod5tkpgfw26v70gyhwe3g0vq0ltfjtpnrq5cn5wweem7xwugxjhhah9isk1nzztis7651huqr43icwh1wmj90p42o7ht9967szmmuij1dy45wm31flfwrvia00ieq173khjhk9had4uyhqemgg9xhwn280af54zv3s5cxii751sp5yq27p1s98s52zr6hj0oh95x2b0pqt47wk74rw0xh13a5z0agevoact24kt624drqx9f4miairfzrfmwj6gqvcaeac26xsc0wa7d7hhl6ywdsby1rx5tj4nwk3ir5w246qrcmuyhoczeuxpk21trssie4pg0wmn5fsxltsulgbsj7p5x916ef9u96vczzvh0cqcx5xtjytijr5xv8yxnworn4uwa0wwztezxwp60idl7aafyq6nmdy980dmcls4mqywqc3pk9bnh4yb6i23s8oux9a0k1cb84x9eph3gl80gupjh2aj6ideg2p7ibszvuy0mretfy0v01 == \a\e\l\g\e\m\l\o\d\5\t\k\p\g\f\w\2\6\v\7\0\g\y\h\w\e\3\g\0\v\q\0\l\t\f\j\t\p\n\r\q\5\c\n\5\w\w\e\e\m\7\x\w\u\g\x\j\h\h\a\h\9\i\s\k\1\n\z\z\t\i\s\7\6\5\1\h\u\q\r\4\3\i\c\w\h\1\w\m\j\9\0\p\4\2\o\7\h\t\9\9\6\7\s\z\m\m\u\i\j\1\d\y\4\5\w\m\3\1\f\l\f\w\r\v\i\a\0\0\i\e\q\1\7\3\k\h\j\h\k\9\h\a\d\4\u\y\h\q\e\m\g\g\9\x\h\w\n\2\8\0\a\f\5\4\z\v\3\s\5\c\x\i\i\7\5\1\s\p\5\y\q\2\7\p\1\s\9\8\s\5\2\z\r\6\h\j\0\o\h\9\5\x\2\b\0\p\q\t\4\7\w\k\7\4\r\w\0\x\h\1\3\a\5\z\0\a\g\e\v\o\a\c\t\2\4\k\t\6\2\4\d\r\q\x\9\f\4\m\i\a\i\r\f\z\r\f\m\w\j\6\g\q\v\c\a\e\a\c\2\6\x\s\c\0\w\a\7\d\7\h\h\l\6\y\w\d\s\b\y\1\r\x\5\t\j\4\n\w\k\3\i\r\5\w\2\4\6\q\r\c\m\u\y\h\o\c\z\e\u\x\p\k\2\1\t\r\s\s\i\e\4\p\g\0\w\m\n\5\f\s\x\l\t\s\u\l\g\b\s\j\7\p\5\x\9\1\6\e\f\9\u\9\6\v\c\z\z\v\h\0\c\q\c\x\5\x\t\j\y\t\i\j\r\5\x\v\8\y\x\n\w\o\r\n\4\u\w\a\0\w\w\z\t\e\z\x\w\p\6\0\i\d\l\7\a\a\f\y\q\6\n\m\d\y\9\8\0\d\m\c\l\s\4\m\q\y\w\q\c\3\p\k\9\b\n\h\4\y\b\6\i\2\3\s\8\o\u\x\9\a\0\k\1\c\b\8\4\x\9\e\p\h\3\g\l\8\0\g\u\p\j\h\2\a\j\6\i\d\e\g\2\p\7\i\b\s\z\v\u\y\0\m\r\e\t\f\y\0\v\0\1 ]] 00:07:51.350 00:07:51.350 real 0m1.110s 00:07:51.350 user 0m0.522s 00:07:51.350 sys 0m0.354s 00:07:51.350 ************************************ 00:07:51.350 END TEST dd_flag_nofollow 00:07:51.350 ************************************ 00:07:51.350 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.350 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:51.608 14:23:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:51.608 14:23:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.608 14:23:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.608 14:23:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:51.608 ************************************ 00:07:51.608 START TEST dd_flag_noatime 00:07:51.608 ************************************ 00:07:51.608 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:07:51.608 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:51.608 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:51.608 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:51.608 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:51.608 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:51.608 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:51.608 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1734359023 00:07:51.608 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.608 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1734359023 00:07:51.608 14:23:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:52.545 14:23:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.545 [2024-12-16 14:23:44.649453] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:52.545 [2024-12-16 14:23:44.649560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74093 ] 00:07:52.804 [2024-12-16 14:23:44.794293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.804 [2024-12-16 14:23:44.817533] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.804 [2024-12-16 14:23:44.849571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.804  [2024-12-16T14:23:45.004Z] Copying: 512/512 [B] (average 500 kBps) 00:07:52.804 00:07:52.804 14:23:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:52.804 14:23:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1734359023 )) 00:07:52.804 14:23:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.804 14:23:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1734359023 )) 00:07:52.804 14:23:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:53.064 [2024-12-16 14:23:45.040646] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:53.064 [2024-12-16 14:23:45.040749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74107 ] 00:07:53.064 [2024-12-16 14:23:45.186828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.064 [2024-12-16 14:23:45.203979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.064 [2024-12-16 14:23:45.229574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.064  [2024-12-16T14:23:45.522Z] Copying: 512/512 [B] (average 500 kBps) 00:07:53.322 00:07:53.322 14:23:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:53.322 14:23:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1734359025 )) 00:07:53.322 00:07:53.322 real 0m1.786s 00:07:53.322 user 0m0.373s 00:07:53.322 sys 0m0.361s 00:07:53.322 ************************************ 00:07:53.322 END TEST dd_flag_noatime 00:07:53.322 ************************************ 00:07:53.322 14:23:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.322 14:23:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:53.322 14:23:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:53.322 14:23:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.322 14:23:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.323 14:23:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:53.323 ************************************ 00:07:53.323 START TEST dd_flags_misc 00:07:53.323 ************************************ 00:07:53.323 14:23:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:07:53.323 14:23:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:53.323 14:23:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:53.323 14:23:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:53.323 14:23:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:53.323 14:23:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:53.323 14:23:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:53.323 14:23:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:53.323 14:23:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.323 14:23:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:53.323 [2024-12-16 14:23:45.474683] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:53.323 [2024-12-16 14:23:45.474786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74130 ] 00:07:53.582 [2024-12-16 14:23:45.619740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.582 [2024-12-16 14:23:45.637226] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.582 [2024-12-16 14:23:45.663157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.582  [2024-12-16T14:23:45.782Z] Copying: 512/512 [B] (average 500 kBps) 00:07:53.582 00:07:53.841 14:23:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ zpo687vgkxhor1z2snlqgg2zc90m6mqnoadq8f9j2lnkvew75aqzxz76s8x8183r4sws6e3kqh1243r6sb8oj1knqdnklcx1njpavvnnva4sd81vy9d5ba7gjxsnzeq7bwmpgkj129zey494woetw4slc7v2cvpi3zwo7wf5knp7pm2yaonfl3bkplfmqghhclt2j33emh2gy1nrh06fqyw7wrijacazrqy9b25l9u0u4mjsnr6oksk4q8r5bga504vqaoz37vkc9kow9pknpcpnmb8riw265bbtmil9bobtwgsqrzs0d9z14dl7vrzgpmhb1v7trqj6dt3ob2u2rq3haw3hchdbe4kefi8ek35bmvdunsowjq9y3rsf4e7nze2wo0bk2st66xhq5hq4iewp7m5fnj78sayorb0mdwqjolqqckxueta85c95ia66etsotbz9rdd7q5wok4imeto96rmsdnolf5ve6d7lfzkmrtlrgo072viirti0l4k9 == \z\p\o\6\8\7\v\g\k\x\h\o\r\1\z\2\s\n\l\q\g\g\2\z\c\9\0\m\6\m\q\n\o\a\d\q\8\f\9\j\2\l\n\k\v\e\w\7\5\a\q\z\x\z\7\6\s\8\x\8\1\8\3\r\4\s\w\s\6\e\3\k\q\h\1\2\4\3\r\6\s\b\8\o\j\1\k\n\q\d\n\k\l\c\x\1\n\j\p\a\v\v\n\n\v\a\4\s\d\8\1\v\y\9\d\5\b\a\7\g\j\x\s\n\z\e\q\7\b\w\m\p\g\k\j\1\2\9\z\e\y\4\9\4\w\o\e\t\w\4\s\l\c\7\v\2\c\v\p\i\3\z\w\o\7\w\f\5\k\n\p\7\p\m\2\y\a\o\n\f\l\3\b\k\p\l\f\m\q\g\h\h\c\l\t\2\j\3\3\e\m\h\2\g\y\1\n\r\h\0\6\f\q\y\w\7\w\r\i\j\a\c\a\z\r\q\y\9\b\2\5\l\9\u\0\u\4\m\j\s\n\r\6\o\k\s\k\4\q\8\r\5\b\g\a\5\0\4\v\q\a\o\z\3\7\v\k\c\9\k\o\w\9\p\k\n\p\c\p\n\m\b\8\r\i\w\2\6\5\b\b\t\m\i\l\9\b\o\b\t\w\g\s\q\r\z\s\0\d\9\z\1\4\d\l\7\v\r\z\g\p\m\h\b\1\v\7\t\r\q\j\6\d\t\3\o\b\2\u\2\r\q\3\h\a\w\3\h\c\h\d\b\e\4\k\e\f\i\8\e\k\3\5\b\m\v\d\u\n\s\o\w\j\q\9\y\3\r\s\f\4\e\7\n\z\e\2\w\o\0\b\k\2\s\t\6\6\x\h\q\5\h\q\4\i\e\w\p\7\m\5\f\n\j\7\8\s\a\y\o\r\b\0\m\d\w\q\j\o\l\q\q\c\k\x\u\e\t\a\8\5\c\9\5\i\a\6\6\e\t\s\o\t\b\z\9\r\d\d\7\q\5\w\o\k\4\i\m\e\t\o\9\6\r\m\s\d\n\o\l\f\5\v\e\6\d\7\l\f\z\k\m\r\t\l\r\g\o\0\7\2\v\i\i\r\t\i\0\l\4\k\9 ]] 00:07:53.841 14:23:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.841 14:23:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:53.841 [2024-12-16 14:23:45.822415] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:53.841 [2024-12-16 14:23:45.822552] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74140 ] 00:07:53.841 [2024-12-16 14:23:45.952925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.841 [2024-12-16 14:23:45.970904] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.841 [2024-12-16 14:23:45.997108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.841  [2024-12-16T14:23:46.301Z] Copying: 512/512 [B] (average 500 kBps) 00:07:54.101 00:07:54.101 14:23:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ zpo687vgkxhor1z2snlqgg2zc90m6mqnoadq8f9j2lnkvew75aqzxz76s8x8183r4sws6e3kqh1243r6sb8oj1knqdnklcx1njpavvnnva4sd81vy9d5ba7gjxsnzeq7bwmpgkj129zey494woetw4slc7v2cvpi3zwo7wf5knp7pm2yaonfl3bkplfmqghhclt2j33emh2gy1nrh06fqyw7wrijacazrqy9b25l9u0u4mjsnr6oksk4q8r5bga504vqaoz37vkc9kow9pknpcpnmb8riw265bbtmil9bobtwgsqrzs0d9z14dl7vrzgpmhb1v7trqj6dt3ob2u2rq3haw3hchdbe4kefi8ek35bmvdunsowjq9y3rsf4e7nze2wo0bk2st66xhq5hq4iewp7m5fnj78sayorb0mdwqjolqqckxueta85c95ia66etsotbz9rdd7q5wok4imeto96rmsdnolf5ve6d7lfzkmrtlrgo072viirti0l4k9 == \z\p\o\6\8\7\v\g\k\x\h\o\r\1\z\2\s\n\l\q\g\g\2\z\c\9\0\m\6\m\q\n\o\a\d\q\8\f\9\j\2\l\n\k\v\e\w\7\5\a\q\z\x\z\7\6\s\8\x\8\1\8\3\r\4\s\w\s\6\e\3\k\q\h\1\2\4\3\r\6\s\b\8\o\j\1\k\n\q\d\n\k\l\c\x\1\n\j\p\a\v\v\n\n\v\a\4\s\d\8\1\v\y\9\d\5\b\a\7\g\j\x\s\n\z\e\q\7\b\w\m\p\g\k\j\1\2\9\z\e\y\4\9\4\w\o\e\t\w\4\s\l\c\7\v\2\c\v\p\i\3\z\w\o\7\w\f\5\k\n\p\7\p\m\2\y\a\o\n\f\l\3\b\k\p\l\f\m\q\g\h\h\c\l\t\2\j\3\3\e\m\h\2\g\y\1\n\r\h\0\6\f\q\y\w\7\w\r\i\j\a\c\a\z\r\q\y\9\b\2\5\l\9\u\0\u\4\m\j\s\n\r\6\o\k\s\k\4\q\8\r\5\b\g\a\5\0\4\v\q\a\o\z\3\7\v\k\c\9\k\o\w\9\p\k\n\p\c\p\n\m\b\8\r\i\w\2\6\5\b\b\t\m\i\l\9\b\o\b\t\w\g\s\q\r\z\s\0\d\9\z\1\4\d\l\7\v\r\z\g\p\m\h\b\1\v\7\t\r\q\j\6\d\t\3\o\b\2\u\2\r\q\3\h\a\w\3\h\c\h\d\b\e\4\k\e\f\i\8\e\k\3\5\b\m\v\d\u\n\s\o\w\j\q\9\y\3\r\s\f\4\e\7\n\z\e\2\w\o\0\b\k\2\s\t\6\6\x\h\q\5\h\q\4\i\e\w\p\7\m\5\f\n\j\7\8\s\a\y\o\r\b\0\m\d\w\q\j\o\l\q\q\c\k\x\u\e\t\a\8\5\c\9\5\i\a\6\6\e\t\s\o\t\b\z\9\r\d\d\7\q\5\w\o\k\4\i\m\e\t\o\9\6\r\m\s\d\n\o\l\f\5\v\e\6\d\7\l\f\z\k\m\r\t\l\r\g\o\0\7\2\v\i\i\r\t\i\0\l\4\k\9 ]] 00:07:54.101 14:23:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.101 14:23:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:54.101 [2024-12-16 14:23:46.166834] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:54.101 [2024-12-16 14:23:46.166939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74149 ] 00:07:54.360 [2024-12-16 14:23:46.303847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.360 [2024-12-16 14:23:46.322518] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.360 [2024-12-16 14:23:46.351586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.360  [2024-12-16T14:23:46.560Z] Copying: 512/512 [B] (average 83 kBps) 00:07:54.360 00:07:54.360 14:23:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ zpo687vgkxhor1z2snlqgg2zc90m6mqnoadq8f9j2lnkvew75aqzxz76s8x8183r4sws6e3kqh1243r6sb8oj1knqdnklcx1njpavvnnva4sd81vy9d5ba7gjxsnzeq7bwmpgkj129zey494woetw4slc7v2cvpi3zwo7wf5knp7pm2yaonfl3bkplfmqghhclt2j33emh2gy1nrh06fqyw7wrijacazrqy9b25l9u0u4mjsnr6oksk4q8r5bga504vqaoz37vkc9kow9pknpcpnmb8riw265bbtmil9bobtwgsqrzs0d9z14dl7vrzgpmhb1v7trqj6dt3ob2u2rq3haw3hchdbe4kefi8ek35bmvdunsowjq9y3rsf4e7nze2wo0bk2st66xhq5hq4iewp7m5fnj78sayorb0mdwqjolqqckxueta85c95ia66etsotbz9rdd7q5wok4imeto96rmsdnolf5ve6d7lfzkmrtlrgo072viirti0l4k9 == \z\p\o\6\8\7\v\g\k\x\h\o\r\1\z\2\s\n\l\q\g\g\2\z\c\9\0\m\6\m\q\n\o\a\d\q\8\f\9\j\2\l\n\k\v\e\w\7\5\a\q\z\x\z\7\6\s\8\x\8\1\8\3\r\4\s\w\s\6\e\3\k\q\h\1\2\4\3\r\6\s\b\8\o\j\1\k\n\q\d\n\k\l\c\x\1\n\j\p\a\v\v\n\n\v\a\4\s\d\8\1\v\y\9\d\5\b\a\7\g\j\x\s\n\z\e\q\7\b\w\m\p\g\k\j\1\2\9\z\e\y\4\9\4\w\o\e\t\w\4\s\l\c\7\v\2\c\v\p\i\3\z\w\o\7\w\f\5\k\n\p\7\p\m\2\y\a\o\n\f\l\3\b\k\p\l\f\m\q\g\h\h\c\l\t\2\j\3\3\e\m\h\2\g\y\1\n\r\h\0\6\f\q\y\w\7\w\r\i\j\a\c\a\z\r\q\y\9\b\2\5\l\9\u\0\u\4\m\j\s\n\r\6\o\k\s\k\4\q\8\r\5\b\g\a\5\0\4\v\q\a\o\z\3\7\v\k\c\9\k\o\w\9\p\k\n\p\c\p\n\m\b\8\r\i\w\2\6\5\b\b\t\m\i\l\9\b\o\b\t\w\g\s\q\r\z\s\0\d\9\z\1\4\d\l\7\v\r\z\g\p\m\h\b\1\v\7\t\r\q\j\6\d\t\3\o\b\2\u\2\r\q\3\h\a\w\3\h\c\h\d\b\e\4\k\e\f\i\8\e\k\3\5\b\m\v\d\u\n\s\o\w\j\q\9\y\3\r\s\f\4\e\7\n\z\e\2\w\o\0\b\k\2\s\t\6\6\x\h\q\5\h\q\4\i\e\w\p\7\m\5\f\n\j\7\8\s\a\y\o\r\b\0\m\d\w\q\j\o\l\q\q\c\k\x\u\e\t\a\8\5\c\9\5\i\a\6\6\e\t\s\o\t\b\z\9\r\d\d\7\q\5\w\o\k\4\i\m\e\t\o\9\6\r\m\s\d\n\o\l\f\5\v\e\6\d\7\l\f\z\k\m\r\t\l\r\g\o\0\7\2\v\i\i\r\t\i\0\l\4\k\9 ]] 00:07:54.360 14:23:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.360 14:23:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:54.360 [2024-12-16 14:23:46.534144] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:54.360 [2024-12-16 14:23:46.534245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74153 ] 00:07:54.619 [2024-12-16 14:23:46.678050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.619 [2024-12-16 14:23:46.695761] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.619 [2024-12-16 14:23:46.722270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.619  [2024-12-16T14:23:47.079Z] Copying: 512/512 [B] (average 500 kBps) 00:07:54.879 00:07:54.879 14:23:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ zpo687vgkxhor1z2snlqgg2zc90m6mqnoadq8f9j2lnkvew75aqzxz76s8x8183r4sws6e3kqh1243r6sb8oj1knqdnklcx1njpavvnnva4sd81vy9d5ba7gjxsnzeq7bwmpgkj129zey494woetw4slc7v2cvpi3zwo7wf5knp7pm2yaonfl3bkplfmqghhclt2j33emh2gy1nrh06fqyw7wrijacazrqy9b25l9u0u4mjsnr6oksk4q8r5bga504vqaoz37vkc9kow9pknpcpnmb8riw265bbtmil9bobtwgsqrzs0d9z14dl7vrzgpmhb1v7trqj6dt3ob2u2rq3haw3hchdbe4kefi8ek35bmvdunsowjq9y3rsf4e7nze2wo0bk2st66xhq5hq4iewp7m5fnj78sayorb0mdwqjolqqckxueta85c95ia66etsotbz9rdd7q5wok4imeto96rmsdnolf5ve6d7lfzkmrtlrgo072viirti0l4k9 == \z\p\o\6\8\7\v\g\k\x\h\o\r\1\z\2\s\n\l\q\g\g\2\z\c\9\0\m\6\m\q\n\o\a\d\q\8\f\9\j\2\l\n\k\v\e\w\7\5\a\q\z\x\z\7\6\s\8\x\8\1\8\3\r\4\s\w\s\6\e\3\k\q\h\1\2\4\3\r\6\s\b\8\o\j\1\k\n\q\d\n\k\l\c\x\1\n\j\p\a\v\v\n\n\v\a\4\s\d\8\1\v\y\9\d\5\b\a\7\g\j\x\s\n\z\e\q\7\b\w\m\p\g\k\j\1\2\9\z\e\y\4\9\4\w\o\e\t\w\4\s\l\c\7\v\2\c\v\p\i\3\z\w\o\7\w\f\5\k\n\p\7\p\m\2\y\a\o\n\f\l\3\b\k\p\l\f\m\q\g\h\h\c\l\t\2\j\3\3\e\m\h\2\g\y\1\n\r\h\0\6\f\q\y\w\7\w\r\i\j\a\c\a\z\r\q\y\9\b\2\5\l\9\u\0\u\4\m\j\s\n\r\6\o\k\s\k\4\q\8\r\5\b\g\a\5\0\4\v\q\a\o\z\3\7\v\k\c\9\k\o\w\9\p\k\n\p\c\p\n\m\b\8\r\i\w\2\6\5\b\b\t\m\i\l\9\b\o\b\t\w\g\s\q\r\z\s\0\d\9\z\1\4\d\l\7\v\r\z\g\p\m\h\b\1\v\7\t\r\q\j\6\d\t\3\o\b\2\u\2\r\q\3\h\a\w\3\h\c\h\d\b\e\4\k\e\f\i\8\e\k\3\5\b\m\v\d\u\n\s\o\w\j\q\9\y\3\r\s\f\4\e\7\n\z\e\2\w\o\0\b\k\2\s\t\6\6\x\h\q\5\h\q\4\i\e\w\p\7\m\5\f\n\j\7\8\s\a\y\o\r\b\0\m\d\w\q\j\o\l\q\q\c\k\x\u\e\t\a\8\5\c\9\5\i\a\6\6\e\t\s\o\t\b\z\9\r\d\d\7\q\5\w\o\k\4\i\m\e\t\o\9\6\r\m\s\d\n\o\l\f\5\v\e\6\d\7\l\f\z\k\m\r\t\l\r\g\o\0\7\2\v\i\i\r\t\i\0\l\4\k\9 ]] 00:07:54.879 14:23:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:54.879 14:23:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:54.879 14:23:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:54.879 14:23:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:54.879 14:23:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.879 14:23:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:54.879 [2024-12-16 14:23:46.886070] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:54.879 [2024-12-16 14:23:46.886149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74168 ] 00:07:54.879 [2024-12-16 14:23:47.025053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.879 [2024-12-16 14:23:47.043212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.879 [2024-12-16 14:23:47.070634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.138  [2024-12-16T14:23:47.338Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.138 00:07:55.138 14:23:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pwo871k02c9j60eubccnb1kgd6ovwt3eq54u93c1ti0kf1a6ylmfkvh97wg5m0ooil2arfvj98kz2o98vci937txb0deqotv01nq7q5tce37ytioap2pblzcf0rkc9dfpn2ugzeq8e88e5b9p356p0apo9sags09rwkbyr1iedkwab9i39r8cxw3lb15uovqskwuu9aqtcsge5cg0buuozi2ueugm21fqv85plb381sqsv8pld1qr9gu260znredp6amoaozpfjzwy7ryvd356k69mph3cuirjk4a9mlq9wtd17292twdt7qx6zwiitkj3925qxwy4b4mhmk1q718phk8ze4tm52i4dom96ljg4kopbywfjxnd61480ssxo6yh02mka2fs8td5u0li0ru1q7k7rn5o64xm79baavpxgha6juwf7jgk0jg12w1nvacm4t5zvidv507o6ri7r18ztn4b2pgxik4iyaivqc7ze5a2dqi00iee9vtxk4653r == \p\w\o\8\7\1\k\0\2\c\9\j\6\0\e\u\b\c\c\n\b\1\k\g\d\6\o\v\w\t\3\e\q\5\4\u\9\3\c\1\t\i\0\k\f\1\a\6\y\l\m\f\k\v\h\9\7\w\g\5\m\0\o\o\i\l\2\a\r\f\v\j\9\8\k\z\2\o\9\8\v\c\i\9\3\7\t\x\b\0\d\e\q\o\t\v\0\1\n\q\7\q\5\t\c\e\3\7\y\t\i\o\a\p\2\p\b\l\z\c\f\0\r\k\c\9\d\f\p\n\2\u\g\z\e\q\8\e\8\8\e\5\b\9\p\3\5\6\p\0\a\p\o\9\s\a\g\s\0\9\r\w\k\b\y\r\1\i\e\d\k\w\a\b\9\i\3\9\r\8\c\x\w\3\l\b\1\5\u\o\v\q\s\k\w\u\u\9\a\q\t\c\s\g\e\5\c\g\0\b\u\u\o\z\i\2\u\e\u\g\m\2\1\f\q\v\8\5\p\l\b\3\8\1\s\q\s\v\8\p\l\d\1\q\r\9\g\u\2\6\0\z\n\r\e\d\p\6\a\m\o\a\o\z\p\f\j\z\w\y\7\r\y\v\d\3\5\6\k\6\9\m\p\h\3\c\u\i\r\j\k\4\a\9\m\l\q\9\w\t\d\1\7\2\9\2\t\w\d\t\7\q\x\6\z\w\i\i\t\k\j\3\9\2\5\q\x\w\y\4\b\4\m\h\m\k\1\q\7\1\8\p\h\k\8\z\e\4\t\m\5\2\i\4\d\o\m\9\6\l\j\g\4\k\o\p\b\y\w\f\j\x\n\d\6\1\4\8\0\s\s\x\o\6\y\h\0\2\m\k\a\2\f\s\8\t\d\5\u\0\l\i\0\r\u\1\q\7\k\7\r\n\5\o\6\4\x\m\7\9\b\a\a\v\p\x\g\h\a\6\j\u\w\f\7\j\g\k\0\j\g\1\2\w\1\n\v\a\c\m\4\t\5\z\v\i\d\v\5\0\7\o\6\r\i\7\r\1\8\z\t\n\4\b\2\p\g\x\i\k\4\i\y\a\i\v\q\c\7\z\e\5\a\2\d\q\i\0\0\i\e\e\9\v\t\x\k\4\6\5\3\r ]] 00:07:55.138 14:23:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.138 14:23:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:55.138 [2024-12-16 14:23:47.223212] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:55.138 [2024-12-16 14:23:47.223300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74172 ] 00:07:55.398 [2024-12-16 14:23:47.353981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.398 [2024-12-16 14:23:47.371887] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.398 [2024-12-16 14:23:47.399610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.398  [2024-12-16T14:23:47.598Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.398 00:07:55.398 14:23:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pwo871k02c9j60eubccnb1kgd6ovwt3eq54u93c1ti0kf1a6ylmfkvh97wg5m0ooil2arfvj98kz2o98vci937txb0deqotv01nq7q5tce37ytioap2pblzcf0rkc9dfpn2ugzeq8e88e5b9p356p0apo9sags09rwkbyr1iedkwab9i39r8cxw3lb15uovqskwuu9aqtcsge5cg0buuozi2ueugm21fqv85plb381sqsv8pld1qr9gu260znredp6amoaozpfjzwy7ryvd356k69mph3cuirjk4a9mlq9wtd17292twdt7qx6zwiitkj3925qxwy4b4mhmk1q718phk8ze4tm52i4dom96ljg4kopbywfjxnd61480ssxo6yh02mka2fs8td5u0li0ru1q7k7rn5o64xm79baavpxgha6juwf7jgk0jg12w1nvacm4t5zvidv507o6ri7r18ztn4b2pgxik4iyaivqc7ze5a2dqi00iee9vtxk4653r == \p\w\o\8\7\1\k\0\2\c\9\j\6\0\e\u\b\c\c\n\b\1\k\g\d\6\o\v\w\t\3\e\q\5\4\u\9\3\c\1\t\i\0\k\f\1\a\6\y\l\m\f\k\v\h\9\7\w\g\5\m\0\o\o\i\l\2\a\r\f\v\j\9\8\k\z\2\o\9\8\v\c\i\9\3\7\t\x\b\0\d\e\q\o\t\v\0\1\n\q\7\q\5\t\c\e\3\7\y\t\i\o\a\p\2\p\b\l\z\c\f\0\r\k\c\9\d\f\p\n\2\u\g\z\e\q\8\e\8\8\e\5\b\9\p\3\5\6\p\0\a\p\o\9\s\a\g\s\0\9\r\w\k\b\y\r\1\i\e\d\k\w\a\b\9\i\3\9\r\8\c\x\w\3\l\b\1\5\u\o\v\q\s\k\w\u\u\9\a\q\t\c\s\g\e\5\c\g\0\b\u\u\o\z\i\2\u\e\u\g\m\2\1\f\q\v\8\5\p\l\b\3\8\1\s\q\s\v\8\p\l\d\1\q\r\9\g\u\2\6\0\z\n\r\e\d\p\6\a\m\o\a\o\z\p\f\j\z\w\y\7\r\y\v\d\3\5\6\k\6\9\m\p\h\3\c\u\i\r\j\k\4\a\9\m\l\q\9\w\t\d\1\7\2\9\2\t\w\d\t\7\q\x\6\z\w\i\i\t\k\j\3\9\2\5\q\x\w\y\4\b\4\m\h\m\k\1\q\7\1\8\p\h\k\8\z\e\4\t\m\5\2\i\4\d\o\m\9\6\l\j\g\4\k\o\p\b\y\w\f\j\x\n\d\6\1\4\8\0\s\s\x\o\6\y\h\0\2\m\k\a\2\f\s\8\t\d\5\u\0\l\i\0\r\u\1\q\7\k\7\r\n\5\o\6\4\x\m\7\9\b\a\a\v\p\x\g\h\a\6\j\u\w\f\7\j\g\k\0\j\g\1\2\w\1\n\v\a\c\m\4\t\5\z\v\i\d\v\5\0\7\o\6\r\i\7\r\1\8\z\t\n\4\b\2\p\g\x\i\k\4\i\y\a\i\v\q\c\7\z\e\5\a\2\d\q\i\0\0\i\e\e\9\v\t\x\k\4\6\5\3\r ]] 00:07:55.398 14:23:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.398 14:23:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:55.398 [2024-12-16 14:23:47.575358] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:55.398 [2024-12-16 14:23:47.575478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74176 ] 00:07:55.657 [2024-12-16 14:23:47.720194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.657 [2024-12-16 14:23:47.738150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.657 [2024-12-16 14:23:47.764413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.657  [2024-12-16T14:23:48.116Z] Copying: 512/512 [B] (average 166 kBps) 00:07:55.916 00:07:55.916 14:23:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pwo871k02c9j60eubccnb1kgd6ovwt3eq54u93c1ti0kf1a6ylmfkvh97wg5m0ooil2arfvj98kz2o98vci937txb0deqotv01nq7q5tce37ytioap2pblzcf0rkc9dfpn2ugzeq8e88e5b9p356p0apo9sags09rwkbyr1iedkwab9i39r8cxw3lb15uovqskwuu9aqtcsge5cg0buuozi2ueugm21fqv85plb381sqsv8pld1qr9gu260znredp6amoaozpfjzwy7ryvd356k69mph3cuirjk4a9mlq9wtd17292twdt7qx6zwiitkj3925qxwy4b4mhmk1q718phk8ze4tm52i4dom96ljg4kopbywfjxnd61480ssxo6yh02mka2fs8td5u0li0ru1q7k7rn5o64xm79baavpxgha6juwf7jgk0jg12w1nvacm4t5zvidv507o6ri7r18ztn4b2pgxik4iyaivqc7ze5a2dqi00iee9vtxk4653r == \p\w\o\8\7\1\k\0\2\c\9\j\6\0\e\u\b\c\c\n\b\1\k\g\d\6\o\v\w\t\3\e\q\5\4\u\9\3\c\1\t\i\0\k\f\1\a\6\y\l\m\f\k\v\h\9\7\w\g\5\m\0\o\o\i\l\2\a\r\f\v\j\9\8\k\z\2\o\9\8\v\c\i\9\3\7\t\x\b\0\d\e\q\o\t\v\0\1\n\q\7\q\5\t\c\e\3\7\y\t\i\o\a\p\2\p\b\l\z\c\f\0\r\k\c\9\d\f\p\n\2\u\g\z\e\q\8\e\8\8\e\5\b\9\p\3\5\6\p\0\a\p\o\9\s\a\g\s\0\9\r\w\k\b\y\r\1\i\e\d\k\w\a\b\9\i\3\9\r\8\c\x\w\3\l\b\1\5\u\o\v\q\s\k\w\u\u\9\a\q\t\c\s\g\e\5\c\g\0\b\u\u\o\z\i\2\u\e\u\g\m\2\1\f\q\v\8\5\p\l\b\3\8\1\s\q\s\v\8\p\l\d\1\q\r\9\g\u\2\6\0\z\n\r\e\d\p\6\a\m\o\a\o\z\p\f\j\z\w\y\7\r\y\v\d\3\5\6\k\6\9\m\p\h\3\c\u\i\r\j\k\4\a\9\m\l\q\9\w\t\d\1\7\2\9\2\t\w\d\t\7\q\x\6\z\w\i\i\t\k\j\3\9\2\5\q\x\w\y\4\b\4\m\h\m\k\1\q\7\1\8\p\h\k\8\z\e\4\t\m\5\2\i\4\d\o\m\9\6\l\j\g\4\k\o\p\b\y\w\f\j\x\n\d\6\1\4\8\0\s\s\x\o\6\y\h\0\2\m\k\a\2\f\s\8\t\d\5\u\0\l\i\0\r\u\1\q\7\k\7\r\n\5\o\6\4\x\m\7\9\b\a\a\v\p\x\g\h\a\6\j\u\w\f\7\j\g\k\0\j\g\1\2\w\1\n\v\a\c\m\4\t\5\z\v\i\d\v\5\0\7\o\6\r\i\7\r\1\8\z\t\n\4\b\2\p\g\x\i\k\4\i\y\a\i\v\q\c\7\z\e\5\a\2\d\q\i\0\0\i\e\e\9\v\t\x\k\4\6\5\3\r ]] 00:07:55.916 14:23:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.916 14:23:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:55.916 [2024-12-16 14:23:47.939857] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:55.916 [2024-12-16 14:23:47.939962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74191 ] 00:07:55.916 [2024-12-16 14:23:48.082743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.916 [2024-12-16 14:23:48.100199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.176 [2024-12-16 14:23:48.127135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.176  [2024-12-16T14:23:48.376Z] Copying: 512/512 [B] (average 250 kBps) 00:07:56.176 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pwo871k02c9j60eubccnb1kgd6ovwt3eq54u93c1ti0kf1a6ylmfkvh97wg5m0ooil2arfvj98kz2o98vci937txb0deqotv01nq7q5tce37ytioap2pblzcf0rkc9dfpn2ugzeq8e88e5b9p356p0apo9sags09rwkbyr1iedkwab9i39r8cxw3lb15uovqskwuu9aqtcsge5cg0buuozi2ueugm21fqv85plb381sqsv8pld1qr9gu260znredp6amoaozpfjzwy7ryvd356k69mph3cuirjk4a9mlq9wtd17292twdt7qx6zwiitkj3925qxwy4b4mhmk1q718phk8ze4tm52i4dom96ljg4kopbywfjxnd61480ssxo6yh02mka2fs8td5u0li0ru1q7k7rn5o64xm79baavpxgha6juwf7jgk0jg12w1nvacm4t5zvidv507o6ri7r18ztn4b2pgxik4iyaivqc7ze5a2dqi00iee9vtxk4653r == \p\w\o\8\7\1\k\0\2\c\9\j\6\0\e\u\b\c\c\n\b\1\k\g\d\6\o\v\w\t\3\e\q\5\4\u\9\3\c\1\t\i\0\k\f\1\a\6\y\l\m\f\k\v\h\9\7\w\g\5\m\0\o\o\i\l\2\a\r\f\v\j\9\8\k\z\2\o\9\8\v\c\i\9\3\7\t\x\b\0\d\e\q\o\t\v\0\1\n\q\7\q\5\t\c\e\3\7\y\t\i\o\a\p\2\p\b\l\z\c\f\0\r\k\c\9\d\f\p\n\2\u\g\z\e\q\8\e\8\8\e\5\b\9\p\3\5\6\p\0\a\p\o\9\s\a\g\s\0\9\r\w\k\b\y\r\1\i\e\d\k\w\a\b\9\i\3\9\r\8\c\x\w\3\l\b\1\5\u\o\v\q\s\k\w\u\u\9\a\q\t\c\s\g\e\5\c\g\0\b\u\u\o\z\i\2\u\e\u\g\m\2\1\f\q\v\8\5\p\l\b\3\8\1\s\q\s\v\8\p\l\d\1\q\r\9\g\u\2\6\0\z\n\r\e\d\p\6\a\m\o\a\o\z\p\f\j\z\w\y\7\r\y\v\d\3\5\6\k\6\9\m\p\h\3\c\u\i\r\j\k\4\a\9\m\l\q\9\w\t\d\1\7\2\9\2\t\w\d\t\7\q\x\6\z\w\i\i\t\k\j\3\9\2\5\q\x\w\y\4\b\4\m\h\m\k\1\q\7\1\8\p\h\k\8\z\e\4\t\m\5\2\i\4\d\o\m\9\6\l\j\g\4\k\o\p\b\y\w\f\j\x\n\d\6\1\4\8\0\s\s\x\o\6\y\h\0\2\m\k\a\2\f\s\8\t\d\5\u\0\l\i\0\r\u\1\q\7\k\7\r\n\5\o\6\4\x\m\7\9\b\a\a\v\p\x\g\h\a\6\j\u\w\f\7\j\g\k\0\j\g\1\2\w\1\n\v\a\c\m\4\t\5\z\v\i\d\v\5\0\7\o\6\r\i\7\r\1\8\z\t\n\4\b\2\p\g\x\i\k\4\i\y\a\i\v\q\c\7\z\e\5\a\2\d\q\i\0\0\i\e\e\9\v\t\x\k\4\6\5\3\r ]] 00:07:56.176 00:07:56.176 real 0m2.839s 00:07:56.176 user 0m1.327s 00:07:56.176 sys 0m1.282s 00:07:56.176 ************************************ 00:07:56.176 END TEST dd_flags_misc 00:07:56.176 ************************************ 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:56.176 * Second test run, disabling liburing, forcing AIO 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:56.176 ************************************ 00:07:56.176 START TEST dd_flag_append_forced_aio 00:07:56.176 ************************************ 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=3vajtlceltfaencjmpe9u3wehuxkf1vv 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=1n3k69snb1g8zxsi5x9snlb3c1dav34j 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 3vajtlceltfaencjmpe9u3wehuxkf1vv 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 1n3k69snb1g8zxsi5x9snlb3c1dav34j 00:07:56.176 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:56.176 [2024-12-16 14:23:48.364638] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:56.176 [2024-12-16 14:23:48.364739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74214 ] 00:07:56.435 [2024-12-16 14:23:48.508297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.435 [2024-12-16 14:23:48.526144] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.435 [2024-12-16 14:23:48.555435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.435  [2024-12-16T14:23:48.895Z] Copying: 32/32 [B] (average 31 kBps) 00:07:56.695 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 1n3k69snb1g8zxsi5x9snlb3c1dav34j3vajtlceltfaencjmpe9u3wehuxkf1vv == \1\n\3\k\6\9\s\n\b\1\g\8\z\x\s\i\5\x\9\s\n\l\b\3\c\1\d\a\v\3\4\j\3\v\a\j\t\l\c\e\l\t\f\a\e\n\c\j\m\p\e\9\u\3\w\e\h\u\x\k\f\1\v\v ]] 00:07:56.695 00:07:56.695 real 0m0.384s 00:07:56.695 user 0m0.163s 00:07:56.695 sys 0m0.098s 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:56.695 ************************************ 00:07:56.695 END TEST dd_flag_append_forced_aio 00:07:56.695 ************************************ 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:56.695 ************************************ 00:07:56.695 START TEST dd_flag_directory_forced_aio 00:07:56.695 ************************************ 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.695 14:23:48 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:56.695 14:23:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:56.695 [2024-12-16 14:23:48.797520] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:56.695 [2024-12-16 14:23:48.797621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74240 ] 00:07:56.954 [2024-12-16 14:23:48.932762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.954 [2024-12-16 14:23:48.951105] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.954 [2024-12-16 14:23:48.977448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.954 [2024-12-16 14:23:48.992618] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:56.954 [2024-12-16 14:23:48.992676] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:56.954 [2024-12-16 14:23:48.992710] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.954 [2024-12-16 14:23:49.049138] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:56.954 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:56.954 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.954 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:56.954 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:56.954 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:56.954 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:56.954 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:56.954 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:56.954 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:56.954 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.954 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.955 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.955 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.955 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.955 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.955 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.955 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:56.955 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:56.955 [2024-12-16 14:23:49.145513] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:56.955 [2024-12-16 14:23:49.145609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74250 ] 00:07:57.213 [2024-12-16 14:23:49.290595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.213 [2024-12-16 14:23:49.307983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.213 [2024-12-16 14:23:49.334038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.213 [2024-12-16 14:23:49.349230] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:57.213 [2024-12-16 14:23:49.349290] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:57.213 [2024-12-16 14:23:49.349326] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:57.213 [2024-12-16 14:23:49.405099] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:57.473 14:23:49 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:57.473 00:07:57.473 real 0m0.706s 00:07:57.473 user 0m0.325s 00:07:57.473 sys 0m0.174s 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.473 ************************************ 00:07:57.473 END TEST dd_flag_directory_forced_aio 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:57.473 ************************************ 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:57.473 ************************************ 00:07:57.473 START TEST dd_flag_nofollow_forced_aio 00:07:57.473 ************************************ 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.473 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.473 [2024-12-16 14:23:49.566767] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:57.473 [2024-12-16 14:23:49.566871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74273 ] 00:07:57.732 [2024-12-16 14:23:49.711737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.732 [2024-12-16 14:23:49.731691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.732 [2024-12-16 14:23:49.760491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.732 [2024-12-16 14:23:49.777476] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:57.732 [2024-12-16 14:23:49.777565] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:57.732 [2024-12-16 14:23:49.777600] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:57.732 [2024-12-16 14:23:49.836343] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:57.732 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:57.732 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:57.732 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:57.732 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:57.732 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:57.732 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:57.732 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:57.732 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:57.732 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:57.733 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.733 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.733 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.733 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.733 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.733 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.733 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.733 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.733 14:23:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:58.035 [2024-12-16 14:23:49.934522] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:58.035 [2024-12-16 14:23:49.935286] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74287 ] 00:07:58.035 [2024-12-16 14:23:50.079446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.035 [2024-12-16 14:23:50.097005] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.035 [2024-12-16 14:23:50.122940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.035 [2024-12-16 14:23:50.138030] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:58.035 [2024-12-16 14:23:50.138089] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:58.035 [2024-12-16 14:23:50.138125] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.035 [2024-12-16 14:23:50.196583] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:07:58.309 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:58.309 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:58.309 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:58.309 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:58.309 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:58.309 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:58.309 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:07:58.309 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:58.309 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:58.309 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.309 [2024-12-16 14:23:50.296862] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:58.309 [2024-12-16 14:23:50.296969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74292 ] 00:07:58.309 [2024-12-16 14:23:50.436252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.309 [2024-12-16 14:23:50.453506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.309 [2024-12-16 14:23:50.479517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.309  [2024-12-16T14:23:50.772Z] Copying: 512/512 [B] (average 500 kBps) 00:07:58.572 00:07:58.572 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 1quaegan6saqbli8pfd2iqn5y4hcblynjutbn38xp3b9ro58aqc7aqew7dvziwa3abk350xe5xcv0dk6yv3jwrcgzqq8mretix1gcl30hye9x9y63tww67lsym47kbrh7yh04gy2j60vokhmrq9pkq88y0ou6w4zf6wq7pnlez6lnymjcvqugupjgh02zp332hml0zanau2huoj5cklap42qzgthl52u3t0gevjnwugf6mmjpnww5rcdfre497k9n90qoim78tzwor9f6bxcdchbwbhijiv7etwbx3ed9q8el1uv1ker2q7h9jli3uzctqdvo9o9ksguc3di4yzofd7r01181rkjd3wwhpcavvferior73sj156svu7qe7msk9jf4cs2qsjlqudf274vpm887n5uolnpgsxgs7myse4c1as8w7lu0109jncmptg6qgx1wtsr7m8rkqpjlibxrkrxke74ztabmprgipudw2e0lxtgffg66r8i3crhr9y4 == \1\q\u\a\e\g\a\n\6\s\a\q\b\l\i\8\p\f\d\2\i\q\n\5\y\4\h\c\b\l\y\n\j\u\t\b\n\3\8\x\p\3\b\9\r\o\5\8\a\q\c\7\a\q\e\w\7\d\v\z\i\w\a\3\a\b\k\3\5\0\x\e\5\x\c\v\0\d\k\6\y\v\3\j\w\r\c\g\z\q\q\8\m\r\e\t\i\x\1\g\c\l\3\0\h\y\e\9\x\9\y\6\3\t\w\w\6\7\l\s\y\m\4\7\k\b\r\h\7\y\h\0\4\g\y\2\j\6\0\v\o\k\h\m\r\q\9\p\k\q\8\8\y\0\o\u\6\w\4\z\f\6\w\q\7\p\n\l\e\z\6\l\n\y\m\j\c\v\q\u\g\u\p\j\g\h\0\2\z\p\3\3\2\h\m\l\0\z\a\n\a\u\2\h\u\o\j\5\c\k\l\a\p\4\2\q\z\g\t\h\l\5\2\u\3\t\0\g\e\v\j\n\w\u\g\f\6\m\m\j\p\n\w\w\5\r\c\d\f\r\e\4\9\7\k\9\n\9\0\q\o\i\m\7\8\t\z\w\o\r\9\f\6\b\x\c\d\c\h\b\w\b\h\i\j\i\v\7\e\t\w\b\x\3\e\d\9\q\8\e\l\1\u\v\1\k\e\r\2\q\7\h\9\j\l\i\3\u\z\c\t\q\d\v\o\9\o\9\k\s\g\u\c\3\d\i\4\y\z\o\f\d\7\r\0\1\1\8\1\r\k\j\d\3\w\w\h\p\c\a\v\v\f\e\r\i\o\r\7\3\s\j\1\5\6\s\v\u\7\q\e\7\m\s\k\9\j\f\4\c\s\2\q\s\j\l\q\u\d\f\2\7\4\v\p\m\8\8\7\n\5\u\o\l\n\p\g\s\x\g\s\7\m\y\s\e\4\c\1\a\s\8\w\7\l\u\0\1\0\9\j\n\c\m\p\t\g\6\q\g\x\1\w\t\s\r\7\m\8\r\k\q\p\j\l\i\b\x\r\k\r\x\k\e\7\4\z\t\a\b\m\p\r\g\i\p\u\d\w\2\e\0\l\x\t\g\f\f\g\6\6\r\8\i\3\c\r\h\r\9\y\4 ]] 00:07:58.572 00:07:58.572 real 0m1.118s 00:07:58.572 user 0m0.519s 00:07:58.572 sys 0m0.269s 00:07:58.572 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.572 ************************************ 00:07:58.572 END TEST dd_flag_nofollow_forced_aio 00:07:58.572 ************************************ 00:07:58.573 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:58.573 14:23:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:07:58.573 14:23:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.573 14:23:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.573 14:23:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:58.573 ************************************ 00:07:58.573 START TEST dd_flag_noatime_forced_aio 00:07:58.573 ************************************ 00:07:58.573 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:07:58.573 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:58.573 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:58.573 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:58.573 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:58.573 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:58.573 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.573 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1734359030 00:07:58.573 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.573 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1734359030 00:07:58.573 14:23:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:59.509 14:23:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.767 [2024-12-16 14:23:51.750869] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:59.767 [2024-12-16 14:23:51.750996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74327 ] 00:07:59.767 [2024-12-16 14:23:51.906058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.767 [2024-12-16 14:23:51.929981] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.767 [2024-12-16 14:23:51.964888] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.027  [2024-12-16T14:23:52.227Z] Copying: 512/512 [B] (average 500 kBps) 00:08:00.027 00:08:00.027 14:23:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:00.027 14:23:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1734359030 )) 00:08:00.027 14:23:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.027 14:23:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1734359030 )) 00:08:00.027 14:23:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.027 [2024-12-16 14:23:52.180055] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:00.027 [2024-12-16 14:23:52.180147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74344 ] 00:08:00.286 [2024-12-16 14:23:52.331010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.286 [2024-12-16 14:23:52.354003] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.286 [2024-12-16 14:23:52.386228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.286  [2024-12-16T14:23:52.744Z] Copying: 512/512 [B] (average 500 kBps) 00:08:00.544 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1734359032 )) 00:08:00.544 00:08:00.544 real 0m1.860s 00:08:00.544 user 0m0.394s 00:08:00.544 sys 0m0.226s 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:00.544 ************************************ 00:08:00.544 END TEST dd_flag_noatime_forced_aio 00:08:00.544 ************************************ 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:00.544 ************************************ 00:08:00.544 START TEST dd_flags_misc_forced_aio 00:08:00.544 ************************************ 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:00.544 14:23:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:00.544 [2024-12-16 14:23:52.649360] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:00.544 [2024-12-16 14:23:52.649483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74365 ] 00:08:00.802 [2024-12-16 14:23:52.794224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.802 [2024-12-16 14:23:52.811894] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.802 [2024-12-16 14:23:52.837907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.802  [2024-12-16T14:23:53.002Z] Copying: 512/512 [B] (average 500 kBps) 00:08:00.802 00:08:00.802 14:23:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dqxjb2btnj6xrid91j6w20483fq14lw0esn2i750dichbiz8w06kfyn5kfbhlrlhaeo17jey2ed6z868wb5so5d8984jf22yyqpci55tcze95zj84unxfozx9aypa95nnjiowz0jjyqomi2dah7wroigfk1y6n4iw6we19sqiop2vf3sfne1wsyz76kkf4erbrej58omv6vm1uqs7km155j425xz0dig0y6s4vacmu2cocbrzhyvih90gwauihg89sllch02g95vybrkyoapsn8ax13bh0gux7u1sypt8zgm8siqa3cbtu3txgpe49x6za9q0st3vjjtjhfkgtiub85ca90worn218o5kwnp1b5q2t4ga6u4hsyjnh0rs1xa8fd6tca8joc0x32067ohuoeias7zxzrsc9n75asnkl1hjy26w1chgg1ur2jip21xrwozy2vj7u6rotv9tz8rt2rzfc2ib7oopjrd7v8eufsdxgptiqhhtpln83crmjwf == 
\d\q\x\j\b\2\b\t\n\j\6\x\r\i\d\9\1\j\6\w\2\0\4\8\3\f\q\1\4\l\w\0\e\s\n\2\i\7\5\0\d\i\c\h\b\i\z\8\w\0\6\k\f\y\n\5\k\f\b\h\l\r\l\h\a\e\o\1\7\j\e\y\2\e\d\6\z\8\6\8\w\b\5\s\o\5\d\8\9\8\4\j\f\2\2\y\y\q\p\c\i\5\5\t\c\z\e\9\5\z\j\8\4\u\n\x\f\o\z\x\9\a\y\p\a\9\5\n\n\j\i\o\w\z\0\j\j\y\q\o\m\i\2\d\a\h\7\w\r\o\i\g\f\k\1\y\6\n\4\i\w\6\w\e\1\9\s\q\i\o\p\2\v\f\3\s\f\n\e\1\w\s\y\z\7\6\k\k\f\4\e\r\b\r\e\j\5\8\o\m\v\6\v\m\1\u\q\s\7\k\m\1\5\5\j\4\2\5\x\z\0\d\i\g\0\y\6\s\4\v\a\c\m\u\2\c\o\c\b\r\z\h\y\v\i\h\9\0\g\w\a\u\i\h\g\8\9\s\l\l\c\h\0\2\g\9\5\v\y\b\r\k\y\o\a\p\s\n\8\a\x\1\3\b\h\0\g\u\x\7\u\1\s\y\p\t\8\z\g\m\8\s\i\q\a\3\c\b\t\u\3\t\x\g\p\e\4\9\x\6\z\a\9\q\0\s\t\3\v\j\j\t\j\h\f\k\g\t\i\u\b\8\5\c\a\9\0\w\o\r\n\2\1\8\o\5\k\w\n\p\1\b\5\q\2\t\4\g\a\6\u\4\h\s\y\j\n\h\0\r\s\1\x\a\8\f\d\6\t\c\a\8\j\o\c\0\x\3\2\0\6\7\o\h\u\o\e\i\a\s\7\z\x\z\r\s\c\9\n\7\5\a\s\n\k\l\1\h\j\y\2\6\w\1\c\h\g\g\1\u\r\2\j\i\p\2\1\x\r\w\o\z\y\2\v\j\7\u\6\r\o\t\v\9\t\z\8\r\t\2\r\z\f\c\2\i\b\7\o\o\p\j\r\d\7\v\8\e\u\f\s\d\x\g\p\t\i\q\h\h\t\p\l\n\8\3\c\r\m\j\w\f ]] 00:08:00.802 14:23:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:00.802 14:23:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:01.060 [2024-12-16 14:23:53.034743] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:01.060 [2024-12-16 14:23:53.034863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74378 ] 00:08:01.060 [2024-12-16 14:23:53.179037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.060 [2024-12-16 14:23:53.196556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.060 [2024-12-16 14:23:53.225059] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.060  [2024-12-16T14:23:53.524Z] Copying: 512/512 [B] (average 500 kBps) 00:08:01.324 00:08:01.324 14:23:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dqxjb2btnj6xrid91j6w20483fq14lw0esn2i750dichbiz8w06kfyn5kfbhlrlhaeo17jey2ed6z868wb5so5d8984jf22yyqpci55tcze95zj84unxfozx9aypa95nnjiowz0jjyqomi2dah7wroigfk1y6n4iw6we19sqiop2vf3sfne1wsyz76kkf4erbrej58omv6vm1uqs7km155j425xz0dig0y6s4vacmu2cocbrzhyvih90gwauihg89sllch02g95vybrkyoapsn8ax13bh0gux7u1sypt8zgm8siqa3cbtu3txgpe49x6za9q0st3vjjtjhfkgtiub85ca90worn218o5kwnp1b5q2t4ga6u4hsyjnh0rs1xa8fd6tca8joc0x32067ohuoeias7zxzrsc9n75asnkl1hjy26w1chgg1ur2jip21xrwozy2vj7u6rotv9tz8rt2rzfc2ib7oopjrd7v8eufsdxgptiqhhtpln83crmjwf == 
\d\q\x\j\b\2\b\t\n\j\6\x\r\i\d\9\1\j\6\w\2\0\4\8\3\f\q\1\4\l\w\0\e\s\n\2\i\7\5\0\d\i\c\h\b\i\z\8\w\0\6\k\f\y\n\5\k\f\b\h\l\r\l\h\a\e\o\1\7\j\e\y\2\e\d\6\z\8\6\8\w\b\5\s\o\5\d\8\9\8\4\j\f\2\2\y\y\q\p\c\i\5\5\t\c\z\e\9\5\z\j\8\4\u\n\x\f\o\z\x\9\a\y\p\a\9\5\n\n\j\i\o\w\z\0\j\j\y\q\o\m\i\2\d\a\h\7\w\r\o\i\g\f\k\1\y\6\n\4\i\w\6\w\e\1\9\s\q\i\o\p\2\v\f\3\s\f\n\e\1\w\s\y\z\7\6\k\k\f\4\e\r\b\r\e\j\5\8\o\m\v\6\v\m\1\u\q\s\7\k\m\1\5\5\j\4\2\5\x\z\0\d\i\g\0\y\6\s\4\v\a\c\m\u\2\c\o\c\b\r\z\h\y\v\i\h\9\0\g\w\a\u\i\h\g\8\9\s\l\l\c\h\0\2\g\9\5\v\y\b\r\k\y\o\a\p\s\n\8\a\x\1\3\b\h\0\g\u\x\7\u\1\s\y\p\t\8\z\g\m\8\s\i\q\a\3\c\b\t\u\3\t\x\g\p\e\4\9\x\6\z\a\9\q\0\s\t\3\v\j\j\t\j\h\f\k\g\t\i\u\b\8\5\c\a\9\0\w\o\r\n\2\1\8\o\5\k\w\n\p\1\b\5\q\2\t\4\g\a\6\u\4\h\s\y\j\n\h\0\r\s\1\x\a\8\f\d\6\t\c\a\8\j\o\c\0\x\3\2\0\6\7\o\h\u\o\e\i\a\s\7\z\x\z\r\s\c\9\n\7\5\a\s\n\k\l\1\h\j\y\2\6\w\1\c\h\g\g\1\u\r\2\j\i\p\2\1\x\r\w\o\z\y\2\v\j\7\u\6\r\o\t\v\9\t\z\8\r\t\2\r\z\f\c\2\i\b\7\o\o\p\j\r\d\7\v\8\e\u\f\s\d\x\g\p\t\i\q\h\h\t\p\l\n\8\3\c\r\m\j\w\f ]] 00:08:01.324 14:23:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:01.324 14:23:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:01.324 [2024-12-16 14:23:53.411165] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:01.324 [2024-12-16 14:23:53.411276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74380 ] 00:08:01.584 [2024-12-16 14:23:53.554935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.584 [2024-12-16 14:23:53.575073] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.584 [2024-12-16 14:23:53.601285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.584  [2024-12-16T14:23:53.784Z] Copying: 512/512 [B] (average 500 kBps) 00:08:01.584 00:08:01.584 14:23:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dqxjb2btnj6xrid91j6w20483fq14lw0esn2i750dichbiz8w06kfyn5kfbhlrlhaeo17jey2ed6z868wb5so5d8984jf22yyqpci55tcze95zj84unxfozx9aypa95nnjiowz0jjyqomi2dah7wroigfk1y6n4iw6we19sqiop2vf3sfne1wsyz76kkf4erbrej58omv6vm1uqs7km155j425xz0dig0y6s4vacmu2cocbrzhyvih90gwauihg89sllch02g95vybrkyoapsn8ax13bh0gux7u1sypt8zgm8siqa3cbtu3txgpe49x6za9q0st3vjjtjhfkgtiub85ca90worn218o5kwnp1b5q2t4ga6u4hsyjnh0rs1xa8fd6tca8joc0x32067ohuoeias7zxzrsc9n75asnkl1hjy26w1chgg1ur2jip21xrwozy2vj7u6rotv9tz8rt2rzfc2ib7oopjrd7v8eufsdxgptiqhhtpln83crmjwf == 
\d\q\x\j\b\2\b\t\n\j\6\x\r\i\d\9\1\j\6\w\2\0\4\8\3\f\q\1\4\l\w\0\e\s\n\2\i\7\5\0\d\i\c\h\b\i\z\8\w\0\6\k\f\y\n\5\k\f\b\h\l\r\l\h\a\e\o\1\7\j\e\y\2\e\d\6\z\8\6\8\w\b\5\s\o\5\d\8\9\8\4\j\f\2\2\y\y\q\p\c\i\5\5\t\c\z\e\9\5\z\j\8\4\u\n\x\f\o\z\x\9\a\y\p\a\9\5\n\n\j\i\o\w\z\0\j\j\y\q\o\m\i\2\d\a\h\7\w\r\o\i\g\f\k\1\y\6\n\4\i\w\6\w\e\1\9\s\q\i\o\p\2\v\f\3\s\f\n\e\1\w\s\y\z\7\6\k\k\f\4\e\r\b\r\e\j\5\8\o\m\v\6\v\m\1\u\q\s\7\k\m\1\5\5\j\4\2\5\x\z\0\d\i\g\0\y\6\s\4\v\a\c\m\u\2\c\o\c\b\r\z\h\y\v\i\h\9\0\g\w\a\u\i\h\g\8\9\s\l\l\c\h\0\2\g\9\5\v\y\b\r\k\y\o\a\p\s\n\8\a\x\1\3\b\h\0\g\u\x\7\u\1\s\y\p\t\8\z\g\m\8\s\i\q\a\3\c\b\t\u\3\t\x\g\p\e\4\9\x\6\z\a\9\q\0\s\t\3\v\j\j\t\j\h\f\k\g\t\i\u\b\8\5\c\a\9\0\w\o\r\n\2\1\8\o\5\k\w\n\p\1\b\5\q\2\t\4\g\a\6\u\4\h\s\y\j\n\h\0\r\s\1\x\a\8\f\d\6\t\c\a\8\j\o\c\0\x\3\2\0\6\7\o\h\u\o\e\i\a\s\7\z\x\z\r\s\c\9\n\7\5\a\s\n\k\l\1\h\j\y\2\6\w\1\c\h\g\g\1\u\r\2\j\i\p\2\1\x\r\w\o\z\y\2\v\j\7\u\6\r\o\t\v\9\t\z\8\r\t\2\r\z\f\c\2\i\b\7\o\o\p\j\r\d\7\v\8\e\u\f\s\d\x\g\p\t\i\q\h\h\t\p\l\n\8\3\c\r\m\j\w\f ]] 00:08:01.584 14:23:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:01.584 14:23:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:01.843 [2024-12-16 14:23:53.788777] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:01.843 [2024-12-16 14:23:53.788878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74382 ] 00:08:01.843 [2024-12-16 14:23:53.933093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.843 [2024-12-16 14:23:53.954883] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.843 [2024-12-16 14:23:53.983034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.843  [2024-12-16T14:23:54.303Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.103 00:08:02.103 14:23:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dqxjb2btnj6xrid91j6w20483fq14lw0esn2i750dichbiz8w06kfyn5kfbhlrlhaeo17jey2ed6z868wb5so5d8984jf22yyqpci55tcze95zj84unxfozx9aypa95nnjiowz0jjyqomi2dah7wroigfk1y6n4iw6we19sqiop2vf3sfne1wsyz76kkf4erbrej58omv6vm1uqs7km155j425xz0dig0y6s4vacmu2cocbrzhyvih90gwauihg89sllch02g95vybrkyoapsn8ax13bh0gux7u1sypt8zgm8siqa3cbtu3txgpe49x6za9q0st3vjjtjhfkgtiub85ca90worn218o5kwnp1b5q2t4ga6u4hsyjnh0rs1xa8fd6tca8joc0x32067ohuoeias7zxzrsc9n75asnkl1hjy26w1chgg1ur2jip21xrwozy2vj7u6rotv9tz8rt2rzfc2ib7oopjrd7v8eufsdxgptiqhhtpln83crmjwf == 
\d\q\x\j\b\2\b\t\n\j\6\x\r\i\d\9\1\j\6\w\2\0\4\8\3\f\q\1\4\l\w\0\e\s\n\2\i\7\5\0\d\i\c\h\b\i\z\8\w\0\6\k\f\y\n\5\k\f\b\h\l\r\l\h\a\e\o\1\7\j\e\y\2\e\d\6\z\8\6\8\w\b\5\s\o\5\d\8\9\8\4\j\f\2\2\y\y\q\p\c\i\5\5\t\c\z\e\9\5\z\j\8\4\u\n\x\f\o\z\x\9\a\y\p\a\9\5\n\n\j\i\o\w\z\0\j\j\y\q\o\m\i\2\d\a\h\7\w\r\o\i\g\f\k\1\y\6\n\4\i\w\6\w\e\1\9\s\q\i\o\p\2\v\f\3\s\f\n\e\1\w\s\y\z\7\6\k\k\f\4\e\r\b\r\e\j\5\8\o\m\v\6\v\m\1\u\q\s\7\k\m\1\5\5\j\4\2\5\x\z\0\d\i\g\0\y\6\s\4\v\a\c\m\u\2\c\o\c\b\r\z\h\y\v\i\h\9\0\g\w\a\u\i\h\g\8\9\s\l\l\c\h\0\2\g\9\5\v\y\b\r\k\y\o\a\p\s\n\8\a\x\1\3\b\h\0\g\u\x\7\u\1\s\y\p\t\8\z\g\m\8\s\i\q\a\3\c\b\t\u\3\t\x\g\p\e\4\9\x\6\z\a\9\q\0\s\t\3\v\j\j\t\j\h\f\k\g\t\i\u\b\8\5\c\a\9\0\w\o\r\n\2\1\8\o\5\k\w\n\p\1\b\5\q\2\t\4\g\a\6\u\4\h\s\y\j\n\h\0\r\s\1\x\a\8\f\d\6\t\c\a\8\j\o\c\0\x\3\2\0\6\7\o\h\u\o\e\i\a\s\7\z\x\z\r\s\c\9\n\7\5\a\s\n\k\l\1\h\j\y\2\6\w\1\c\h\g\g\1\u\r\2\j\i\p\2\1\x\r\w\o\z\y\2\v\j\7\u\6\r\o\t\v\9\t\z\8\r\t\2\r\z\f\c\2\i\b\7\o\o\p\j\r\d\7\v\8\e\u\f\s\d\x\g\p\t\i\q\h\h\t\p\l\n\8\3\c\r\m\j\w\f ]] 00:08:02.103 14:23:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:02.103 14:23:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:02.103 14:23:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:02.103 14:23:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:02.103 14:23:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.103 14:23:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:02.103 [2024-12-16 14:23:54.154769] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
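Each dd_flags_misc_forced_aio iteration above pairs one input flag (direct, nonblock) with one output flag (direct, nonblock, sync, dsync) and verifies that the 512 random bytes survive the copy unchanged. A condensed sketch of that loop, assuming the same spdk_dd binary and dump files; head -c /dev/urandom stands in here for gen_bytes and cmp for the harness's string comparison:

dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
src=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dst=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
  for flag_rw in "${flags_rw[@]}"; do
    head -c 512 /dev/urandom > "$src"        # fresh 512-byte payload per combination
    "$dd_bin" --aio --if="$src" --iflag="$flag_ro" --of="$dst" --oflag="$flag_rw"
    cmp -s -n 512 "$src" "$dst" && echo "ok: $flag_ro -> $flag_rw" || echo "MISMATCH: $flag_ro -> $flag_rw"
  done
done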
00:08:02.103 [2024-12-16 14:23:54.154864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74395 ] 00:08:02.103 [2024-12-16 14:23:54.288010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.362 [2024-12-16 14:23:54.307596] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.362 [2024-12-16 14:23:54.334577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.362  [2024-12-16T14:23:54.562Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.362 00:08:02.362 14:23:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zjbjk6092oddwog1dzw7o3fnfg700oayj343z0dhfs9gownfithjb3hp90ixukkl9mpdjvuc0fje3wsw1xxunc3q3zhxdzezvys2sjvlpvjeb0tuun1psmn6zo0n1g8aco6ytkmyhyvjnupkuszhd8lec4btj2tyfdyr2hpnhhhxb0p7qoj9kfyg7it17r0ynl3dah2xt6c6f5dgsee5tbqwp91c7he3c4dlnmd1uo204dv7lia5s517twj20ilu3nhwhzzd9tmsdjy67oyyr5z9ifx67ons1vtoi41gahkdufwqaqjje5gz2jcew1131m0t4g8vnzibxtu5pd2gs7nub7ukvomugvfovkk99wygu7kijot4l4x1ea0njhbpnfecov7z3co43orhkfj80g80be8044dszdpi1b4rssbfkjqo2czvvf6jfgdm6r566kzycikee4gno75albpupwko2lch4pxawlfjwutentqdc5wgiqzf4phi8mo79cfc == \z\j\b\j\k\6\0\9\2\o\d\d\w\o\g\1\d\z\w\7\o\3\f\n\f\g\7\0\0\o\a\y\j\3\4\3\z\0\d\h\f\s\9\g\o\w\n\f\i\t\h\j\b\3\h\p\9\0\i\x\u\k\k\l\9\m\p\d\j\v\u\c\0\f\j\e\3\w\s\w\1\x\x\u\n\c\3\q\3\z\h\x\d\z\e\z\v\y\s\2\s\j\v\l\p\v\j\e\b\0\t\u\u\n\1\p\s\m\n\6\z\o\0\n\1\g\8\a\c\o\6\y\t\k\m\y\h\y\v\j\n\u\p\k\u\s\z\h\d\8\l\e\c\4\b\t\j\2\t\y\f\d\y\r\2\h\p\n\h\h\h\x\b\0\p\7\q\o\j\9\k\f\y\g\7\i\t\1\7\r\0\y\n\l\3\d\a\h\2\x\t\6\c\6\f\5\d\g\s\e\e\5\t\b\q\w\p\9\1\c\7\h\e\3\c\4\d\l\n\m\d\1\u\o\2\0\4\d\v\7\l\i\a\5\s\5\1\7\t\w\j\2\0\i\l\u\3\n\h\w\h\z\z\d\9\t\m\s\d\j\y\6\7\o\y\y\r\5\z\9\i\f\x\6\7\o\n\s\1\v\t\o\i\4\1\g\a\h\k\d\u\f\w\q\a\q\j\j\e\5\g\z\2\j\c\e\w\1\1\3\1\m\0\t\4\g\8\v\n\z\i\b\x\t\u\5\p\d\2\g\s\7\n\u\b\7\u\k\v\o\m\u\g\v\f\o\v\k\k\9\9\w\y\g\u\7\k\i\j\o\t\4\l\4\x\1\e\a\0\n\j\h\b\p\n\f\e\c\o\v\7\z\3\c\o\4\3\o\r\h\k\f\j\8\0\g\8\0\b\e\8\0\4\4\d\s\z\d\p\i\1\b\4\r\s\s\b\f\k\j\q\o\2\c\z\v\v\f\6\j\f\g\d\m\6\r\5\6\6\k\z\y\c\i\k\e\e\4\g\n\o\7\5\a\l\b\p\u\p\w\k\o\2\l\c\h\4\p\x\a\w\l\f\j\w\u\t\e\n\t\q\d\c\5\w\g\i\q\z\f\4\p\h\i\8\m\o\7\9\c\f\c ]] 00:08:02.362 14:23:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.362 14:23:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:02.362 [2024-12-16 14:23:54.524258] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:02.362 [2024-12-16 14:23:54.524355] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74397 ] 00:08:02.621 [2024-12-16 14:23:54.664737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.621 [2024-12-16 14:23:54.683640] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.621 [2024-12-16 14:23:54.714852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.621  [2024-12-16T14:23:55.081Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.881 00:08:02.881 14:23:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zjbjk6092oddwog1dzw7o3fnfg700oayj343z0dhfs9gownfithjb3hp90ixukkl9mpdjvuc0fje3wsw1xxunc3q3zhxdzezvys2sjvlpvjeb0tuun1psmn6zo0n1g8aco6ytkmyhyvjnupkuszhd8lec4btj2tyfdyr2hpnhhhxb0p7qoj9kfyg7it17r0ynl3dah2xt6c6f5dgsee5tbqwp91c7he3c4dlnmd1uo204dv7lia5s517twj20ilu3nhwhzzd9tmsdjy67oyyr5z9ifx67ons1vtoi41gahkdufwqaqjje5gz2jcew1131m0t4g8vnzibxtu5pd2gs7nub7ukvomugvfovkk99wygu7kijot4l4x1ea0njhbpnfecov7z3co43orhkfj80g80be8044dszdpi1b4rssbfkjqo2czvvf6jfgdm6r566kzycikee4gno75albpupwko2lch4pxawlfjwutentqdc5wgiqzf4phi8mo79cfc == \z\j\b\j\k\6\0\9\2\o\d\d\w\o\g\1\d\z\w\7\o\3\f\n\f\g\7\0\0\o\a\y\j\3\4\3\z\0\d\h\f\s\9\g\o\w\n\f\i\t\h\j\b\3\h\p\9\0\i\x\u\k\k\l\9\m\p\d\j\v\u\c\0\f\j\e\3\w\s\w\1\x\x\u\n\c\3\q\3\z\h\x\d\z\e\z\v\y\s\2\s\j\v\l\p\v\j\e\b\0\t\u\u\n\1\p\s\m\n\6\z\o\0\n\1\g\8\a\c\o\6\y\t\k\m\y\h\y\v\j\n\u\p\k\u\s\z\h\d\8\l\e\c\4\b\t\j\2\t\y\f\d\y\r\2\h\p\n\h\h\h\x\b\0\p\7\q\o\j\9\k\f\y\g\7\i\t\1\7\r\0\y\n\l\3\d\a\h\2\x\t\6\c\6\f\5\d\g\s\e\e\5\t\b\q\w\p\9\1\c\7\h\e\3\c\4\d\l\n\m\d\1\u\o\2\0\4\d\v\7\l\i\a\5\s\5\1\7\t\w\j\2\0\i\l\u\3\n\h\w\h\z\z\d\9\t\m\s\d\j\y\6\7\o\y\y\r\5\z\9\i\f\x\6\7\o\n\s\1\v\t\o\i\4\1\g\a\h\k\d\u\f\w\q\a\q\j\j\e\5\g\z\2\j\c\e\w\1\1\3\1\m\0\t\4\g\8\v\n\z\i\b\x\t\u\5\p\d\2\g\s\7\n\u\b\7\u\k\v\o\m\u\g\v\f\o\v\k\k\9\9\w\y\g\u\7\k\i\j\o\t\4\l\4\x\1\e\a\0\n\j\h\b\p\n\f\e\c\o\v\7\z\3\c\o\4\3\o\r\h\k\f\j\8\0\g\8\0\b\e\8\0\4\4\d\s\z\d\p\i\1\b\4\r\s\s\b\f\k\j\q\o\2\c\z\v\v\f\6\j\f\g\d\m\6\r\5\6\6\k\z\y\c\i\k\e\e\4\g\n\o\7\5\a\l\b\p\u\p\w\k\o\2\l\c\h\4\p\x\a\w\l\f\j\w\u\t\e\n\t\q\d\c\5\w\g\i\q\z\f\4\p\h\i\8\m\o\7\9\c\f\c ]] 00:08:02.881 14:23:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.881 14:23:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:02.881 [2024-12-16 14:23:54.911994] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
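The long backslash-escaped strings in these [[ ... == ... ]] checks are the same random payload with every character escaped, which forces bash to compare literally instead of treating the right-hand side as a glob pattern. A tiny illustration of the difference, using hypothetical values rather than this run's payload:

payload='ab*cd'
[[ $payload == ab*cd ]]  && echo "glob match"
[[ $payload == ab\*cd ]] && echo "literal match"
[[ 'abXXcd' == ab*cd ]]  && echo "glob also accepts the wrong payload"
[[ 'abXXcd' == ab\*cd ]] || echo "literal comparison rejects it"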
00:08:02.881 [2024-12-16 14:23:54.912095] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74399 ] 00:08:02.881 [2024-12-16 14:23:55.056247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.881 [2024-12-16 14:23:55.077241] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.139 [2024-12-16 14:23:55.104632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.139  [2024-12-16T14:23:55.339Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.139 00:08:03.140 14:23:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zjbjk6092oddwog1dzw7o3fnfg700oayj343z0dhfs9gownfithjb3hp90ixukkl9mpdjvuc0fje3wsw1xxunc3q3zhxdzezvys2sjvlpvjeb0tuun1psmn6zo0n1g8aco6ytkmyhyvjnupkuszhd8lec4btj2tyfdyr2hpnhhhxb0p7qoj9kfyg7it17r0ynl3dah2xt6c6f5dgsee5tbqwp91c7he3c4dlnmd1uo204dv7lia5s517twj20ilu3nhwhzzd9tmsdjy67oyyr5z9ifx67ons1vtoi41gahkdufwqaqjje5gz2jcew1131m0t4g8vnzibxtu5pd2gs7nub7ukvomugvfovkk99wygu7kijot4l4x1ea0njhbpnfecov7z3co43orhkfj80g80be8044dszdpi1b4rssbfkjqo2czvvf6jfgdm6r566kzycikee4gno75albpupwko2lch4pxawlfjwutentqdc5wgiqzf4phi8mo79cfc == \z\j\b\j\k\6\0\9\2\o\d\d\w\o\g\1\d\z\w\7\o\3\f\n\f\g\7\0\0\o\a\y\j\3\4\3\z\0\d\h\f\s\9\g\o\w\n\f\i\t\h\j\b\3\h\p\9\0\i\x\u\k\k\l\9\m\p\d\j\v\u\c\0\f\j\e\3\w\s\w\1\x\x\u\n\c\3\q\3\z\h\x\d\z\e\z\v\y\s\2\s\j\v\l\p\v\j\e\b\0\t\u\u\n\1\p\s\m\n\6\z\o\0\n\1\g\8\a\c\o\6\y\t\k\m\y\h\y\v\j\n\u\p\k\u\s\z\h\d\8\l\e\c\4\b\t\j\2\t\y\f\d\y\r\2\h\p\n\h\h\h\x\b\0\p\7\q\o\j\9\k\f\y\g\7\i\t\1\7\r\0\y\n\l\3\d\a\h\2\x\t\6\c\6\f\5\d\g\s\e\e\5\t\b\q\w\p\9\1\c\7\h\e\3\c\4\d\l\n\m\d\1\u\o\2\0\4\d\v\7\l\i\a\5\s\5\1\7\t\w\j\2\0\i\l\u\3\n\h\w\h\z\z\d\9\t\m\s\d\j\y\6\7\o\y\y\r\5\z\9\i\f\x\6\7\o\n\s\1\v\t\o\i\4\1\g\a\h\k\d\u\f\w\q\a\q\j\j\e\5\g\z\2\j\c\e\w\1\1\3\1\m\0\t\4\g\8\v\n\z\i\b\x\t\u\5\p\d\2\g\s\7\n\u\b\7\u\k\v\o\m\u\g\v\f\o\v\k\k\9\9\w\y\g\u\7\k\i\j\o\t\4\l\4\x\1\e\a\0\n\j\h\b\p\n\f\e\c\o\v\7\z\3\c\o\4\3\o\r\h\k\f\j\8\0\g\8\0\b\e\8\0\4\4\d\s\z\d\p\i\1\b\4\r\s\s\b\f\k\j\q\o\2\c\z\v\v\f\6\j\f\g\d\m\6\r\5\6\6\k\z\y\c\i\k\e\e\4\g\n\o\7\5\a\l\b\p\u\p\w\k\o\2\l\c\h\4\p\x\a\w\l\f\j\w\u\t\e\n\t\q\d\c\5\w\g\i\q\z\f\4\p\h\i\8\m\o\7\9\c\f\c ]] 00:08:03.140 14:23:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.140 14:23:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:03.140 [2024-12-16 14:23:55.295252] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:03.140 [2024-12-16 14:23:55.295346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74412 ] 00:08:03.398 [2024-12-16 14:23:55.438273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.398 [2024-12-16 14:23:55.456216] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.398 [2024-12-16 14:23:55.482268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.398  [2024-12-16T14:23:55.857Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.657 00:08:03.657 14:23:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zjbjk6092oddwog1dzw7o3fnfg700oayj343z0dhfs9gownfithjb3hp90ixukkl9mpdjvuc0fje3wsw1xxunc3q3zhxdzezvys2sjvlpvjeb0tuun1psmn6zo0n1g8aco6ytkmyhyvjnupkuszhd8lec4btj2tyfdyr2hpnhhhxb0p7qoj9kfyg7it17r0ynl3dah2xt6c6f5dgsee5tbqwp91c7he3c4dlnmd1uo204dv7lia5s517twj20ilu3nhwhzzd9tmsdjy67oyyr5z9ifx67ons1vtoi41gahkdufwqaqjje5gz2jcew1131m0t4g8vnzibxtu5pd2gs7nub7ukvomugvfovkk99wygu7kijot4l4x1ea0njhbpnfecov7z3co43orhkfj80g80be8044dszdpi1b4rssbfkjqo2czvvf6jfgdm6r566kzycikee4gno75albpupwko2lch4pxawlfjwutentqdc5wgiqzf4phi8mo79cfc == \z\j\b\j\k\6\0\9\2\o\d\d\w\o\g\1\d\z\w\7\o\3\f\n\f\g\7\0\0\o\a\y\j\3\4\3\z\0\d\h\f\s\9\g\o\w\n\f\i\t\h\j\b\3\h\p\9\0\i\x\u\k\k\l\9\m\p\d\j\v\u\c\0\f\j\e\3\w\s\w\1\x\x\u\n\c\3\q\3\z\h\x\d\z\e\z\v\y\s\2\s\j\v\l\p\v\j\e\b\0\t\u\u\n\1\p\s\m\n\6\z\o\0\n\1\g\8\a\c\o\6\y\t\k\m\y\h\y\v\j\n\u\p\k\u\s\z\h\d\8\l\e\c\4\b\t\j\2\t\y\f\d\y\r\2\h\p\n\h\h\h\x\b\0\p\7\q\o\j\9\k\f\y\g\7\i\t\1\7\r\0\y\n\l\3\d\a\h\2\x\t\6\c\6\f\5\d\g\s\e\e\5\t\b\q\w\p\9\1\c\7\h\e\3\c\4\d\l\n\m\d\1\u\o\2\0\4\d\v\7\l\i\a\5\s\5\1\7\t\w\j\2\0\i\l\u\3\n\h\w\h\z\z\d\9\t\m\s\d\j\y\6\7\o\y\y\r\5\z\9\i\f\x\6\7\o\n\s\1\v\t\o\i\4\1\g\a\h\k\d\u\f\w\q\a\q\j\j\e\5\g\z\2\j\c\e\w\1\1\3\1\m\0\t\4\g\8\v\n\z\i\b\x\t\u\5\p\d\2\g\s\7\n\u\b\7\u\k\v\o\m\u\g\v\f\o\v\k\k\9\9\w\y\g\u\7\k\i\j\o\t\4\l\4\x\1\e\a\0\n\j\h\b\p\n\f\e\c\o\v\7\z\3\c\o\4\3\o\r\h\k\f\j\8\0\g\8\0\b\e\8\0\4\4\d\s\z\d\p\i\1\b\4\r\s\s\b\f\k\j\q\o\2\c\z\v\v\f\6\j\f\g\d\m\6\r\5\6\6\k\z\y\c\i\k\e\e\4\g\n\o\7\5\a\l\b\p\u\p\w\k\o\2\l\c\h\4\p\x\a\w\l\f\j\w\u\t\e\n\t\q\d\c\5\w\g\i\q\z\f\4\p\h\i\8\m\o\7\9\c\f\c ]] 00:08:03.657 00:08:03.657 real 0m3.037s 00:08:03.657 user 0m1.384s 00:08:03.657 sys 0m0.702s 00:08:03.657 14:23:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.657 ************************************ 00:08:03.657 END TEST dd_flags_misc_forced_aio 00:08:03.657 ************************************ 00:08:03.657 14:23:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:03.657 14:23:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:03.657 14:23:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:03.657 14:23:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:03.657 00:08:03.657 real 0m14.682s 00:08:03.657 user 0m5.827s 00:08:03.657 sys 0m4.186s 00:08:03.657 14:23:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.658 ************************************ 00:08:03.658 14:23:55 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:03.658 END TEST spdk_dd_posix 00:08:03.658 ************************************ 00:08:03.658 14:23:55 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:03.658 14:23:55 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.658 14:23:55 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.658 14:23:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:03.658 ************************************ 00:08:03.658 START TEST spdk_dd_malloc 00:08:03.658 ************************************ 00:08:03.658 14:23:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:03.658 * Looking for test storage... 00:08:03.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:03.658 14:23:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:03.658 14:23:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:03.658 14:23:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:03.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.917 --rc genhtml_branch_coverage=1 00:08:03.917 --rc genhtml_function_coverage=1 00:08:03.917 --rc genhtml_legend=1 00:08:03.917 --rc geninfo_all_blocks=1 00:08:03.917 --rc geninfo_unexecuted_blocks=1 00:08:03.917 00:08:03.917 ' 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:03.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.917 --rc genhtml_branch_coverage=1 00:08:03.917 --rc genhtml_function_coverage=1 00:08:03.917 --rc genhtml_legend=1 00:08:03.917 --rc geninfo_all_blocks=1 00:08:03.917 --rc geninfo_unexecuted_blocks=1 00:08:03.917 00:08:03.917 ' 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:03.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.917 --rc genhtml_branch_coverage=1 00:08:03.917 --rc genhtml_function_coverage=1 00:08:03.917 --rc genhtml_legend=1 00:08:03.917 --rc geninfo_all_blocks=1 00:08:03.917 --rc geninfo_unexecuted_blocks=1 00:08:03.917 00:08:03.917 ' 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:03.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.917 --rc genhtml_branch_coverage=1 00:08:03.917 --rc genhtml_function_coverage=1 00:08:03.917 --rc genhtml_legend=1 00:08:03.917 --rc geninfo_all_blocks=1 00:08:03.917 --rc geninfo_unexecuted_blocks=1 00:08:03.917 00:08:03.917 ' 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.917 14:23:55 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.917 14:23:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:03.918 14:23:55 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.918 14:23:55 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:03.918 14:23:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.918 14:23:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.918 14:23:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:03.918 ************************************ 00:08:03.918 START TEST dd_malloc_copy 00:08:03.918 ************************************ 00:08:03.918 14:23:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:08:03.918 14:23:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:03.918 14:23:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:03.918 14:23:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:08:03.918 14:23:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:03.918 14:23:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:03.918 14:23:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:03.918 14:23:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:03.918 14:23:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:03.918 14:23:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:03.918 14:23:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:03.918 [2024-12-16 14:23:56.002670] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:03.918 [2024-12-16 14:23:56.003244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74483 ] 00:08:03.918 { 00:08:03.918 "subsystems": [ 00:08:03.918 { 00:08:03.918 "subsystem": "bdev", 00:08:03.918 "config": [ 00:08:03.918 { 00:08:03.918 "params": { 00:08:03.918 "block_size": 512, 00:08:03.918 "num_blocks": 1048576, 00:08:03.918 "name": "malloc0" 00:08:03.918 }, 00:08:03.918 "method": "bdev_malloc_create" 00:08:03.918 }, 00:08:03.918 { 00:08:03.918 "params": { 00:08:03.918 "block_size": 512, 00:08:03.918 "num_blocks": 1048576, 00:08:03.918 "name": "malloc1" 00:08:03.918 }, 00:08:03.918 "method": "bdev_malloc_create" 00:08:03.918 }, 00:08:03.918 { 00:08:03.918 "method": "bdev_wait_for_examine" 00:08:03.918 } 00:08:03.918 ] 00:08:03.918 } 00:08:03.918 ] 00:08:03.918 } 00:08:04.177 [2024-12-16 14:23:56.145936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.177 [2024-12-16 14:23:56.163971] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.177 [2024-12-16 14:23:56.195214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.555  [2024-12-16T14:23:58.692Z] Copying: 236/512 [MB] (236 MBps) [2024-12-16T14:23:58.692Z] Copying: 470/512 [MB] (233 MBps) [2024-12-16T14:23:58.951Z] Copying: 512/512 [MB] (average 235 MBps) 00:08:06.751 00:08:06.751 14:23:58 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:06.751 14:23:58 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:06.751 14:23:58 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:06.751 14:23:58 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:06.751 [2024-12-16 14:23:58.896991] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
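The --json /dev/fd/62 argument above hands spdk_dd its bdev configuration (two malloc bdevs plus bdev_wait_for_examine) through a file descriptor. The same copy can be reproduced with the configuration in a plain file; a sketch assuming the spdk_dd path from this log, with /tmp/malloc_copy.json as a hypothetical config path (the two 1048576-block, 512-byte-block malloc bdevs need roughly 1 GiB of hugepage-backed memory):

cat > /tmp/malloc_copy.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 }, "method": "bdev_malloc_create" },
  { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 }, "method": "bdev_malloc_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /tmp/malloc_copy.json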
00:08:06.751 [2024-12-16 14:23:58.897093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74525 ] 00:08:06.751 { 00:08:06.751 "subsystems": [ 00:08:06.751 { 00:08:06.751 "subsystem": "bdev", 00:08:06.751 "config": [ 00:08:06.751 { 00:08:06.751 "params": { 00:08:06.751 "block_size": 512, 00:08:06.751 "num_blocks": 1048576, 00:08:06.751 "name": "malloc0" 00:08:06.751 }, 00:08:06.751 "method": "bdev_malloc_create" 00:08:06.751 }, 00:08:06.751 { 00:08:06.751 "params": { 00:08:06.751 "block_size": 512, 00:08:06.751 "num_blocks": 1048576, 00:08:06.751 "name": "malloc1" 00:08:06.751 }, 00:08:06.751 "method": "bdev_malloc_create" 00:08:06.751 }, 00:08:06.751 { 00:08:06.751 "method": "bdev_wait_for_examine" 00:08:06.751 } 00:08:06.751 ] 00:08:06.751 } 00:08:06.751 ] 00:08:06.751 } 00:08:07.010 [2024-12-16 14:23:59.035404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.010 [2024-12-16 14:23:59.054177] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.010 [2024-12-16 14:23:59.081917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.387  [2024-12-16T14:24:01.525Z] Copying: 231/512 [MB] (231 MBps) [2024-12-16T14:24:01.784Z] Copying: 450/512 [MB] (218 MBps) [2024-12-16T14:24:02.043Z] Copying: 512/512 [MB] (average 225 MBps) 00:08:09.843 00:08:09.843 00:08:09.843 real 0m5.883s 00:08:09.843 user 0m5.274s 00:08:09.843 sys 0m0.467s 00:08:09.843 14:24:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.843 14:24:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:09.843 ************************************ 00:08:09.843 END TEST dd_malloc_copy 00:08:09.843 ************************************ 00:08:09.843 00:08:09.843 real 0m6.151s 00:08:09.843 user 0m5.433s 00:08:09.844 sys 0m0.580s 00:08:09.844 14:24:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.844 ************************************ 00:08:09.844 END TEST spdk_dd_malloc 00:08:09.844 ************************************ 00:08:09.844 14:24:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:09.844 14:24:01 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:09.844 14:24:01 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:09.844 14:24:01 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.844 14:24:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:09.844 ************************************ 00:08:09.844 START TEST spdk_dd_bdev_to_bdev 00:08:09.844 ************************************ 00:08:09.844 14:24:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:09.844 * Looking for test storage... 
00:08:09.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:09.844 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:09.844 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:08:09.844 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.103 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:10.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.104 --rc genhtml_branch_coverage=1 00:08:10.104 --rc genhtml_function_coverage=1 00:08:10.104 --rc genhtml_legend=1 00:08:10.104 --rc geninfo_all_blocks=1 00:08:10.104 --rc geninfo_unexecuted_blocks=1 00:08:10.104 00:08:10.104 ' 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:10.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.104 --rc genhtml_branch_coverage=1 00:08:10.104 --rc genhtml_function_coverage=1 00:08:10.104 --rc genhtml_legend=1 00:08:10.104 --rc geninfo_all_blocks=1 00:08:10.104 --rc geninfo_unexecuted_blocks=1 00:08:10.104 00:08:10.104 ' 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:10.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.104 --rc genhtml_branch_coverage=1 00:08:10.104 --rc genhtml_function_coverage=1 00:08:10.104 --rc genhtml_legend=1 00:08:10.104 --rc geninfo_all_blocks=1 00:08:10.104 --rc geninfo_unexecuted_blocks=1 00:08:10.104 00:08:10.104 ' 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:10.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.104 --rc genhtml_branch_coverage=1 00:08:10.104 --rc genhtml_function_coverage=1 00:08:10.104 --rc genhtml_legend=1 00:08:10.104 --rc geninfo_all_blocks=1 00:08:10.104 --rc geninfo_unexecuted_blocks=1 00:08:10.104 00:08:10.104 ' 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.104 14:24:02 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:10.104 ************************************ 00:08:10.104 START TEST dd_inflate_file 00:08:10.104 ************************************ 00:08:10.104 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:10.104 [2024-12-16 14:24:02.178636] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
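dd_inflate_file appends 64 MiB of zeroes (--bs=1048576 --count=64 from /dev/zero) after the magic line the harness writes into the test file (the echo at bdev_to_bdev.sh@93), which is why the size check that follows reports 67108891 bytes (64*1048576 + 27). A standalone sketch of that append, assuming the paths shown in this log:

dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
echo 'This Is Our Magic, find it' > "$dump0"   # 26 characters plus newline = 27 bytes
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of="$dump0" --oflag=append --bs=1048576 --count=64
wc -c < "$dump0"                               # expect 67108891 = 64*1048576 + 27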
00:08:10.104 [2024-12-16 14:24:02.178728] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74632 ] 00:08:10.363 [2024-12-16 14:24:02.325811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.363 [2024-12-16 14:24:02.350707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.363 [2024-12-16 14:24:02.382630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.363  [2024-12-16T14:24:02.563Z] Copying: 64/64 [MB] (average 1641 MBps) 00:08:10.363 00:08:10.363 00:08:10.363 real 0m0.435s 00:08:10.363 user 0m0.236s 00:08:10.363 sys 0m0.218s 00:08:10.363 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.363 ************************************ 00:08:10.363 END TEST dd_inflate_file 00:08:10.363 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:10.363 ************************************ 00:08:10.623 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:10.623 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:10.623 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:10.623 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:10.623 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:10.623 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:10.623 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:10.623 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.623 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:10.623 ************************************ 00:08:10.623 START TEST dd_copy_to_out_bdev 00:08:10.623 ************************************ 00:08:10.623 14:24:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:10.623 [2024-12-16 14:24:02.661988] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:10.623 [2024-12-16 14:24:02.662076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74671 ] 00:08:10.623 { 00:08:10.623 "subsystems": [ 00:08:10.623 { 00:08:10.623 "subsystem": "bdev", 00:08:10.623 "config": [ 00:08:10.623 { 00:08:10.623 "params": { 00:08:10.623 "trtype": "pcie", 00:08:10.623 "traddr": "0000:00:10.0", 00:08:10.623 "name": "Nvme0" 00:08:10.623 }, 00:08:10.623 "method": "bdev_nvme_attach_controller" 00:08:10.623 }, 00:08:10.623 { 00:08:10.623 "params": { 00:08:10.623 "trtype": "pcie", 00:08:10.623 "traddr": "0000:00:11.0", 00:08:10.623 "name": "Nvme1" 00:08:10.623 }, 00:08:10.623 "method": "bdev_nvme_attach_controller" 00:08:10.623 }, 00:08:10.623 { 00:08:10.623 "method": "bdev_wait_for_examine" 00:08:10.623 } 00:08:10.623 ] 00:08:10.623 } 00:08:10.623 ] 00:08:10.623 } 00:08:10.623 [2024-12-16 14:24:02.803038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.882 [2024-12-16 14:24:02.825139] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.882 [2024-12-16 14:24:02.856362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.819  [2024-12-16T14:24:04.277Z] Copying: 53/64 [MB] (53 MBps) [2024-12-16T14:24:04.537Z] Copying: 64/64 [MB] (average 53 MBps) 00:08:12.337 00:08:12.337 00:08:12.337 real 0m1.731s 00:08:12.337 user 0m1.563s 00:08:12.337 sys 0m1.417s 00:08:12.337 14:24:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.337 14:24:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:12.337 ************************************ 00:08:12.337 END TEST dd_copy_to_out_bdev 00:08:12.337 ************************************ 00:08:12.337 14:24:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:12.337 14:24:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:12.337 14:24:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:12.337 14:24:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.337 14:24:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:12.337 ************************************ 00:08:12.337 START TEST dd_offset_magic 00:08:12.337 ************************************ 00:08:12.337 14:24:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:08:12.337 14:24:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:12.337 14:24:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:12.337 14:24:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:12.337 14:24:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:12.337 14:24:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:12.337 14:24:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 
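dd_copy_to_out_bdev pushes the inflated dump file into the first NVMe namespace (Nvme0n1); the JSON printed above attaches both controllers by PCI address before the copy starts. A sketch of the equivalent invocation, assuming the 0000:00:10.0 / 0000:00:11.0 addresses from this log and a hypothetical temp file in place of the /dev/fd/62 descriptor:

cat > /tmp/nvme_dd.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" }, "method": "bdev_nvme_attach_controller" },
  { "params": { "trtype": "pcie", "traddr": "0000:00:11.0", "name": "Nvme1" }, "method": "bdev_nvme_attach_controller" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /tmp/nvme_dd.json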
00:08:12.337 14:24:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:12.337 14:24:04 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:12.337 [2024-12-16 14:24:04.458809] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:12.337 [2024-12-16 14:24:04.460070] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74715 ] 00:08:12.337 { 00:08:12.337 "subsystems": [ 00:08:12.337 { 00:08:12.337 "subsystem": "bdev", 00:08:12.337 "config": [ 00:08:12.337 { 00:08:12.337 "params": { 00:08:12.337 "trtype": "pcie", 00:08:12.337 "traddr": "0000:00:10.0", 00:08:12.337 "name": "Nvme0" 00:08:12.337 }, 00:08:12.337 "method": "bdev_nvme_attach_controller" 00:08:12.337 }, 00:08:12.337 { 00:08:12.337 "params": { 00:08:12.337 "trtype": "pcie", 00:08:12.337 "traddr": "0000:00:11.0", 00:08:12.337 "name": "Nvme1" 00:08:12.337 }, 00:08:12.337 "method": "bdev_nvme_attach_controller" 00:08:12.337 }, 00:08:12.337 { 00:08:12.337 "method": "bdev_wait_for_examine" 00:08:12.337 } 00:08:12.337 ] 00:08:12.337 } 00:08:12.337 ] 00:08:12.337 } 00:08:12.596 [2024-12-16 14:24:04.604691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.596 [2024-12-16 14:24:04.623192] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.596 [2024-12-16 14:24:04.652117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.855  [2024-12-16T14:24:05.055Z] Copying: 65/65 [MB] (average 928 MBps) 00:08:12.855 00:08:12.855 14:24:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:12.855 14:24:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:12.855 14:24:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:12.855 14:24:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:13.114 [2024-12-16 14:24:05.077575] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:13.114 [2024-12-16 14:24:05.077698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74725 ] 00:08:13.114 { 00:08:13.114 "subsystems": [ 00:08:13.114 { 00:08:13.114 "subsystem": "bdev", 00:08:13.114 "config": [ 00:08:13.114 { 00:08:13.114 "params": { 00:08:13.114 "trtype": "pcie", 00:08:13.114 "traddr": "0000:00:10.0", 00:08:13.114 "name": "Nvme0" 00:08:13.114 }, 00:08:13.114 "method": "bdev_nvme_attach_controller" 00:08:13.114 }, 00:08:13.114 { 00:08:13.114 "params": { 00:08:13.114 "trtype": "pcie", 00:08:13.114 "traddr": "0000:00:11.0", 00:08:13.114 "name": "Nvme1" 00:08:13.114 }, 00:08:13.114 "method": "bdev_nvme_attach_controller" 00:08:13.114 }, 00:08:13.114 { 00:08:13.114 "method": "bdev_wait_for_examine" 00:08:13.114 } 00:08:13.114 ] 00:08:13.114 } 00:08:13.114 ] 00:08:13.114 } 00:08:13.114 [2024-12-16 14:24:05.217153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.114 [2024-12-16 14:24:05.236975] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.114 [2024-12-16 14:24:05.267286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.373  [2024-12-16T14:24:05.573Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:13.373 00:08:13.373 14:24:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:13.373 14:24:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:13.373 14:24:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:13.373 14:24:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:13.373 14:24:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:13.373 14:24:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:13.373 14:24:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:13.632 [2024-12-16 14:24:05.582597] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:13.632 [2024-12-16 14:24:05.582738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74747 ] 00:08:13.632 { 00:08:13.632 "subsystems": [ 00:08:13.632 { 00:08:13.632 "subsystem": "bdev", 00:08:13.632 "config": [ 00:08:13.632 { 00:08:13.632 "params": { 00:08:13.632 "trtype": "pcie", 00:08:13.632 "traddr": "0000:00:10.0", 00:08:13.632 "name": "Nvme0" 00:08:13.632 }, 00:08:13.632 "method": "bdev_nvme_attach_controller" 00:08:13.632 }, 00:08:13.632 { 00:08:13.632 "params": { 00:08:13.632 "trtype": "pcie", 00:08:13.632 "traddr": "0000:00:11.0", 00:08:13.632 "name": "Nvme1" 00:08:13.632 }, 00:08:13.632 "method": "bdev_nvme_attach_controller" 00:08:13.632 }, 00:08:13.632 { 00:08:13.632 "method": "bdev_wait_for_examine" 00:08:13.632 } 00:08:13.632 ] 00:08:13.632 } 00:08:13.632 ] 00:08:13.632 } 00:08:13.632 [2024-12-16 14:24:05.725318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.632 [2024-12-16 14:24:05.747639] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.632 [2024-12-16 14:24:05.780289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.891  [2024-12-16T14:24:06.351Z] Copying: 65/65 [MB] (average 1065 MBps) 00:08:14.151 00:08:14.151 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:14.151 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:14.151 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:14.151 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:14.151 [2024-12-16 14:24:06.231308] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:14.151 [2024-12-16 14:24:06.231450] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74756 ] 00:08:14.151 { 00:08:14.151 "subsystems": [ 00:08:14.151 { 00:08:14.151 "subsystem": "bdev", 00:08:14.151 "config": [ 00:08:14.151 { 00:08:14.151 "params": { 00:08:14.151 "trtype": "pcie", 00:08:14.151 "traddr": "0000:00:10.0", 00:08:14.151 "name": "Nvme0" 00:08:14.151 }, 00:08:14.151 "method": "bdev_nvme_attach_controller" 00:08:14.151 }, 00:08:14.151 { 00:08:14.151 "params": { 00:08:14.151 "trtype": "pcie", 00:08:14.151 "traddr": "0000:00:11.0", 00:08:14.151 "name": "Nvme1" 00:08:14.151 }, 00:08:14.151 "method": "bdev_nvme_attach_controller" 00:08:14.151 }, 00:08:14.151 { 00:08:14.151 "method": "bdev_wait_for_examine" 00:08:14.151 } 00:08:14.151 ] 00:08:14.151 } 00:08:14.151 ] 00:08:14.151 } 00:08:14.410 [2024-12-16 14:24:06.379405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.410 [2024-12-16 14:24:06.400408] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.410 [2024-12-16 14:24:06.428620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.410  [2024-12-16T14:24:06.869Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:14.669 00:08:14.669 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:14.669 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:14.669 00:08:14.669 real 0m2.293s 00:08:14.669 user 0m1.676s 00:08:14.669 sys 0m0.605s 00:08:14.669 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.669 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:14.669 ************************************ 00:08:14.669 END TEST dd_offset_magic 00:08:14.669 ************************************ 00:08:14.669 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:14.669 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:14.669 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:14.669 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:14.669 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:14.669 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:14.669 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:14.669 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:14.669 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:14.669 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:14.669 14:24:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:14.669 [2024-12-16 14:24:06.791161] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:14.669 [2024-12-16 14:24:06.791742] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74793 ] 00:08:14.669 { 00:08:14.669 "subsystems": [ 00:08:14.669 { 00:08:14.669 "subsystem": "bdev", 00:08:14.669 "config": [ 00:08:14.669 { 00:08:14.669 "params": { 00:08:14.669 "trtype": "pcie", 00:08:14.669 "traddr": "0000:00:10.0", 00:08:14.669 "name": "Nvme0" 00:08:14.669 }, 00:08:14.669 "method": "bdev_nvme_attach_controller" 00:08:14.669 }, 00:08:14.669 { 00:08:14.669 "params": { 00:08:14.669 "trtype": "pcie", 00:08:14.669 "traddr": "0000:00:11.0", 00:08:14.669 "name": "Nvme1" 00:08:14.669 }, 00:08:14.669 "method": "bdev_nvme_attach_controller" 00:08:14.669 }, 00:08:14.669 { 00:08:14.669 "method": "bdev_wait_for_examine" 00:08:14.669 } 00:08:14.669 ] 00:08:14.669 } 00:08:14.669 ] 00:08:14.669 } 00:08:14.928 [2024-12-16 14:24:06.935954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.928 [2024-12-16 14:24:06.954535] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.928 [2024-12-16 14:24:06.984676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.187  [2024-12-16T14:24:07.387Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:15.187 00:08:15.187 14:24:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:15.187 14:24:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:15.187 14:24:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:15.187 14:24:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:15.187 14:24:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:15.187 14:24:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:15.188 14:24:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:15.188 14:24:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:15.188 14:24:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:15.188 14:24:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:15.188 [2024-12-16 14:24:07.309884] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:15.188 [2024-12-16 14:24:07.309984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74803 ] 00:08:15.188 { 00:08:15.188 "subsystems": [ 00:08:15.188 { 00:08:15.188 "subsystem": "bdev", 00:08:15.188 "config": [ 00:08:15.188 { 00:08:15.188 "params": { 00:08:15.188 "trtype": "pcie", 00:08:15.188 "traddr": "0000:00:10.0", 00:08:15.188 "name": "Nvme0" 00:08:15.188 }, 00:08:15.188 "method": "bdev_nvme_attach_controller" 00:08:15.188 }, 00:08:15.188 { 00:08:15.188 "params": { 00:08:15.188 "trtype": "pcie", 00:08:15.188 "traddr": "0000:00:11.0", 00:08:15.188 "name": "Nvme1" 00:08:15.188 }, 00:08:15.188 "method": "bdev_nvme_attach_controller" 00:08:15.188 }, 00:08:15.188 { 00:08:15.188 "method": "bdev_wait_for_examine" 00:08:15.188 } 00:08:15.188 ] 00:08:15.188 } 00:08:15.188 ] 00:08:15.188 } 00:08:15.447 [2024-12-16 14:24:07.457614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.447 [2024-12-16 14:24:07.477261] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.447 [2024-12-16 14:24:07.505814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.712  [2024-12-16T14:24:07.912Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:15.712 00:08:15.712 14:24:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:15.712 00:08:15.712 real 0m5.872s 00:08:15.712 user 0m4.397s 00:08:15.712 sys 0m2.758s 00:08:15.712 14:24:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.712 ************************************ 00:08:15.712 END TEST spdk_dd_bdev_to_bdev 00:08:15.712 ************************************ 00:08:15.712 14:24:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:15.712 14:24:07 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:15.712 14:24:07 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:15.712 14:24:07 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.712 14:24:07 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.712 14:24:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:15.712 ************************************ 00:08:15.712 START TEST spdk_dd_uring 00:08:15.712 ************************************ 00:08:15.712 14:24:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:16.004 * Looking for test storage... 
00:08:16.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:16.004 14:24:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:16.004 14:24:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:08:16.004 14:24:07 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:16.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.004 --rc genhtml_branch_coverage=1 00:08:16.004 --rc genhtml_function_coverage=1 00:08:16.004 --rc genhtml_legend=1 00:08:16.004 --rc geninfo_all_blocks=1 00:08:16.004 --rc geninfo_unexecuted_blocks=1 00:08:16.004 00:08:16.004 ' 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:16.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.004 --rc genhtml_branch_coverage=1 00:08:16.004 --rc genhtml_function_coverage=1 00:08:16.004 --rc genhtml_legend=1 00:08:16.004 --rc geninfo_all_blocks=1 00:08:16.004 --rc geninfo_unexecuted_blocks=1 00:08:16.004 00:08:16.004 ' 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:16.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.004 --rc genhtml_branch_coverage=1 00:08:16.004 --rc genhtml_function_coverage=1 00:08:16.004 --rc genhtml_legend=1 00:08:16.004 --rc geninfo_all_blocks=1 00:08:16.004 --rc geninfo_unexecuted_blocks=1 00:08:16.004 00:08:16.004 ' 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:16.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.004 --rc genhtml_branch_coverage=1 00:08:16.004 --rc genhtml_function_coverage=1 00:08:16.004 --rc genhtml_legend=1 00:08:16.004 --rc geninfo_all_blocks=1 00:08:16.004 --rc geninfo_unexecuted_blocks=1 00:08:16.004 00:08:16.004 ' 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.004 14:24:08 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:16.005 ************************************ 00:08:16.005 START TEST dd_uring_copy 00:08:16.005 ************************************ 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:16.005 
14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=7vhp8ohx0t306tf0aew6y69e01tjaekgt942zrl1pmoww59b788mps4ysw1gdq52loc2crob5jp5lrkq3p8yzgpf80b0lve7i8vw1qbtsrzqxu84dca89h8p5qx401yrcl4e02st537teygbyriqzqdf3rooe5m2e0tzafppeb55yr2w3il8aqlw6qtjxo4s5t3vin8eig17n60e2z0e5r7vvs9lavb2owge7cm7mbdqwdxfieh6ecy4ijr65tftmb9bkisv2d5xz1rnbozup2p29uiwix0rpm4ti26dmuicvw3jqi67cb2mckn43f8i9w7jd9w797rego1ks39g2oud09m30fd0seyp63pd9o4lb85lkvty1d409idwo7yuhx634mutd8fi523jhoj7j65ii0t1zu4vgl13vex694f85uaj7cjlsc0psjmb3mzecs512lvx8p90lyd707w1zo6vj47isjfymmlg4lwmw9ftyrpyfvs1ymymuc4abj8eo4pcjrgnnpkvg3b9a00vzja88t4rvqmalp7eck7u0ekrmqqrde4cr84ymrhpnw8tkefbhglme81te0b2w2eqrp10m67ve0zc1j3ee88jfvm9i2ehsyvlc2xm32x9wxyiltmgmqhssnf3fc6ys8957k77lo5ptv8a2lqcwn55ubogpwkcesuocz1mqvm16efokek3dkafsijlaamc393snmdsjfb0d1d37n08e9xq1frjybhbsuk5p3kx58gmea2it1uquiz704mr1q6z6dcjstbsiqysq2pww7oizsf1bm7xcmsphc9ps0hfks9prkx0berdvucoqqye5779ww6d4ne0f8vx3xfv6307jz8jd5d9ldggyek0yk3phuw1kcgbat8teuimp5l50tmdcm0mx8h4arhvg5tljd81nhn1rphrmo1jioavotnolfjenn6db38b50fti4m6rf7gmb54a175baw3dg7raqosokon2pzv7mbtj452b69hexs0t9x2 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
7vhp8ohx0t306tf0aew6y69e01tjaekgt942zrl1pmoww59b788mps4ysw1gdq52loc2crob5jp5lrkq3p8yzgpf80b0lve7i8vw1qbtsrzqxu84dca89h8p5qx401yrcl4e02st537teygbyriqzqdf3rooe5m2e0tzafppeb55yr2w3il8aqlw6qtjxo4s5t3vin8eig17n60e2z0e5r7vvs9lavb2owge7cm7mbdqwdxfieh6ecy4ijr65tftmb9bkisv2d5xz1rnbozup2p29uiwix0rpm4ti26dmuicvw3jqi67cb2mckn43f8i9w7jd9w797rego1ks39g2oud09m30fd0seyp63pd9o4lb85lkvty1d409idwo7yuhx634mutd8fi523jhoj7j65ii0t1zu4vgl13vex694f85uaj7cjlsc0psjmb3mzecs512lvx8p90lyd707w1zo6vj47isjfymmlg4lwmw9ftyrpyfvs1ymymuc4abj8eo4pcjrgnnpkvg3b9a00vzja88t4rvqmalp7eck7u0ekrmqqrde4cr84ymrhpnw8tkefbhglme81te0b2w2eqrp10m67ve0zc1j3ee88jfvm9i2ehsyvlc2xm32x9wxyiltmgmqhssnf3fc6ys8957k77lo5ptv8a2lqcwn55ubogpwkcesuocz1mqvm16efokek3dkafsijlaamc393snmdsjfb0d1d37n08e9xq1frjybhbsuk5p3kx58gmea2it1uquiz704mr1q6z6dcjstbsiqysq2pww7oizsf1bm7xcmsphc9ps0hfks9prkx0berdvucoqqye5779ww6d4ne0f8vx3xfv6307jz8jd5d9ldggyek0yk3phuw1kcgbat8teuimp5l50tmdcm0mx8h4arhvg5tljd81nhn1rphrmo1jioavotnolfjenn6db38b50fti4m6rf7gmb54a175baw3dg7raqosokon2pzv7mbtj452b69hexs0t9x2 00:08:16.005 14:24:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:16.005 [2024-12-16 14:24:08.174370] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:16.005 [2024-12-16 14:24:08.174670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74881 ] 00:08:16.287 [2024-12-16 14:24:08.322124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.287 [2024-12-16 14:24:08.341577] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.287 [2024-12-16 14:24:08.368052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.854  [2024-12-16T14:24:09.054Z] Copying: 511/511 [MB] (average 1560 MBps) 00:08:16.854 00:08:17.113 14:24:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:17.113 14:24:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:17.113 14:24:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:17.113 14:24:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:17.113 [2024-12-16 14:24:09.104971] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:17.113 [2024-12-16 14:24:09.105245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74897 ] 00:08:17.113 { 00:08:17.113 "subsystems": [ 00:08:17.113 { 00:08:17.113 "subsystem": "bdev", 00:08:17.113 "config": [ 00:08:17.113 { 00:08:17.113 "params": { 00:08:17.113 "block_size": 512, 00:08:17.113 "num_blocks": 1048576, 00:08:17.113 "name": "malloc0" 00:08:17.113 }, 00:08:17.113 "method": "bdev_malloc_create" 00:08:17.113 }, 00:08:17.113 { 00:08:17.113 "params": { 00:08:17.113 "filename": "/dev/zram1", 00:08:17.113 "name": "uring0" 00:08:17.113 }, 00:08:17.113 "method": "bdev_uring_create" 00:08:17.113 }, 00:08:17.113 { 00:08:17.113 "method": "bdev_wait_for_examine" 00:08:17.113 } 00:08:17.113 ] 00:08:17.113 } 00:08:17.113 ] 00:08:17.113 } 00:08:17.113 [2024-12-16 14:24:09.249857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.113 [2024-12-16 14:24:09.271143] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.113 [2024-12-16 14:24:09.302246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.490  [2024-12-16T14:24:11.625Z] Copying: 240/512 [MB] (240 MBps) [2024-12-16T14:24:11.625Z] Copying: 494/512 [MB] (253 MBps) [2024-12-16T14:24:11.884Z] Copying: 512/512 [MB] (average 247 MBps) 00:08:19.684 00:08:19.684 14:24:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:19.684 14:24:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:19.684 14:24:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:19.685 14:24:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:19.685 [2024-12-16 14:24:11.779953] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:19.685 [2024-12-16 14:24:11.780029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74939 ] 00:08:19.685 { 00:08:19.685 "subsystems": [ 00:08:19.685 { 00:08:19.685 "subsystem": "bdev", 00:08:19.685 "config": [ 00:08:19.685 { 00:08:19.685 "params": { 00:08:19.685 "block_size": 512, 00:08:19.685 "num_blocks": 1048576, 00:08:19.685 "name": "malloc0" 00:08:19.685 }, 00:08:19.685 "method": "bdev_malloc_create" 00:08:19.685 }, 00:08:19.685 { 00:08:19.685 "params": { 00:08:19.685 "filename": "/dev/zram1", 00:08:19.685 "name": "uring0" 00:08:19.685 }, 00:08:19.685 "method": "bdev_uring_create" 00:08:19.685 }, 00:08:19.685 { 00:08:19.685 "method": "bdev_wait_for_examine" 00:08:19.685 } 00:08:19.685 ] 00:08:19.685 } 00:08:19.685 ] 00:08:19.685 } 00:08:19.944 [2024-12-16 14:24:11.925532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.944 [2024-12-16 14:24:11.947927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.944 [2024-12-16 14:24:11.980499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.322  [2024-12-16T14:24:14.457Z] Copying: 180/512 [MB] (180 MBps) [2024-12-16T14:24:15.025Z] Copying: 371/512 [MB] (191 MBps) [2024-12-16T14:24:15.287Z] Copying: 512/512 [MB] (average 181 MBps) 00:08:23.087 00:08:23.087 14:24:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:23.087 14:24:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 7vhp8ohx0t306tf0aew6y69e01tjaekgt942zrl1pmoww59b788mps4ysw1gdq52loc2crob5jp5lrkq3p8yzgpf80b0lve7i8vw1qbtsrzqxu84dca89h8p5qx401yrcl4e02st537teygbyriqzqdf3rooe5m2e0tzafppeb55yr2w3il8aqlw6qtjxo4s5t3vin8eig17n60e2z0e5r7vvs9lavb2owge7cm7mbdqwdxfieh6ecy4ijr65tftmb9bkisv2d5xz1rnbozup2p29uiwix0rpm4ti26dmuicvw3jqi67cb2mckn43f8i9w7jd9w797rego1ks39g2oud09m30fd0seyp63pd9o4lb85lkvty1d409idwo7yuhx634mutd8fi523jhoj7j65ii0t1zu4vgl13vex694f85uaj7cjlsc0psjmb3mzecs512lvx8p90lyd707w1zo6vj47isjfymmlg4lwmw9ftyrpyfvs1ymymuc4abj8eo4pcjrgnnpkvg3b9a00vzja88t4rvqmalp7eck7u0ekrmqqrde4cr84ymrhpnw8tkefbhglme81te0b2w2eqrp10m67ve0zc1j3ee88jfvm9i2ehsyvlc2xm32x9wxyiltmgmqhssnf3fc6ys8957k77lo5ptv8a2lqcwn55ubogpwkcesuocz1mqvm16efokek3dkafsijlaamc393snmdsjfb0d1d37n08e9xq1frjybhbsuk5p3kx58gmea2it1uquiz704mr1q6z6dcjstbsiqysq2pww7oizsf1bm7xcmsphc9ps0hfks9prkx0berdvucoqqye5779ww6d4ne0f8vx3xfv6307jz8jd5d9ldggyek0yk3phuw1kcgbat8teuimp5l50tmdcm0mx8h4arhvg5tljd81nhn1rphrmo1jioavotnolfjenn6db38b50fti4m6rf7gmb54a175baw3dg7raqosokon2pzv7mbtj452b69hexs0t9x2 == 
\7\v\h\p\8\o\h\x\0\t\3\0\6\t\f\0\a\e\w\6\y\6\9\e\0\1\t\j\a\e\k\g\t\9\4\2\z\r\l\1\p\m\o\w\w\5\9\b\7\8\8\m\p\s\4\y\s\w\1\g\d\q\5\2\l\o\c\2\c\r\o\b\5\j\p\5\l\r\k\q\3\p\8\y\z\g\p\f\8\0\b\0\l\v\e\7\i\8\v\w\1\q\b\t\s\r\z\q\x\u\8\4\d\c\a\8\9\h\8\p\5\q\x\4\0\1\y\r\c\l\4\e\0\2\s\t\5\3\7\t\e\y\g\b\y\r\i\q\z\q\d\f\3\r\o\o\e\5\m\2\e\0\t\z\a\f\p\p\e\b\5\5\y\r\2\w\3\i\l\8\a\q\l\w\6\q\t\j\x\o\4\s\5\t\3\v\i\n\8\e\i\g\1\7\n\6\0\e\2\z\0\e\5\r\7\v\v\s\9\l\a\v\b\2\o\w\g\e\7\c\m\7\m\b\d\q\w\d\x\f\i\e\h\6\e\c\y\4\i\j\r\6\5\t\f\t\m\b\9\b\k\i\s\v\2\d\5\x\z\1\r\n\b\o\z\u\p\2\p\2\9\u\i\w\i\x\0\r\p\m\4\t\i\2\6\d\m\u\i\c\v\w\3\j\q\i\6\7\c\b\2\m\c\k\n\4\3\f\8\i\9\w\7\j\d\9\w\7\9\7\r\e\g\o\1\k\s\3\9\g\2\o\u\d\0\9\m\3\0\f\d\0\s\e\y\p\6\3\p\d\9\o\4\l\b\8\5\l\k\v\t\y\1\d\4\0\9\i\d\w\o\7\y\u\h\x\6\3\4\m\u\t\d\8\f\i\5\2\3\j\h\o\j\7\j\6\5\i\i\0\t\1\z\u\4\v\g\l\1\3\v\e\x\6\9\4\f\8\5\u\a\j\7\c\j\l\s\c\0\p\s\j\m\b\3\m\z\e\c\s\5\1\2\l\v\x\8\p\9\0\l\y\d\7\0\7\w\1\z\o\6\v\j\4\7\i\s\j\f\y\m\m\l\g\4\l\w\m\w\9\f\t\y\r\p\y\f\v\s\1\y\m\y\m\u\c\4\a\b\j\8\e\o\4\p\c\j\r\g\n\n\p\k\v\g\3\b\9\a\0\0\v\z\j\a\8\8\t\4\r\v\q\m\a\l\p\7\e\c\k\7\u\0\e\k\r\m\q\q\r\d\e\4\c\r\8\4\y\m\r\h\p\n\w\8\t\k\e\f\b\h\g\l\m\e\8\1\t\e\0\b\2\w\2\e\q\r\p\1\0\m\6\7\v\e\0\z\c\1\j\3\e\e\8\8\j\f\v\m\9\i\2\e\h\s\y\v\l\c\2\x\m\3\2\x\9\w\x\y\i\l\t\m\g\m\q\h\s\s\n\f\3\f\c\6\y\s\8\9\5\7\k\7\7\l\o\5\p\t\v\8\a\2\l\q\c\w\n\5\5\u\b\o\g\p\w\k\c\e\s\u\o\c\z\1\m\q\v\m\1\6\e\f\o\k\e\k\3\d\k\a\f\s\i\j\l\a\a\m\c\3\9\3\s\n\m\d\s\j\f\b\0\d\1\d\3\7\n\0\8\e\9\x\q\1\f\r\j\y\b\h\b\s\u\k\5\p\3\k\x\5\8\g\m\e\a\2\i\t\1\u\q\u\i\z\7\0\4\m\r\1\q\6\z\6\d\c\j\s\t\b\s\i\q\y\s\q\2\p\w\w\7\o\i\z\s\f\1\b\m\7\x\c\m\s\p\h\c\9\p\s\0\h\f\k\s\9\p\r\k\x\0\b\e\r\d\v\u\c\o\q\q\y\e\5\7\7\9\w\w\6\d\4\n\e\0\f\8\v\x\3\x\f\v\6\3\0\7\j\z\8\j\d\5\d\9\l\d\g\g\y\e\k\0\y\k\3\p\h\u\w\1\k\c\g\b\a\t\8\t\e\u\i\m\p\5\l\5\0\t\m\d\c\m\0\m\x\8\h\4\a\r\h\v\g\5\t\l\j\d\8\1\n\h\n\1\r\p\h\r\m\o\1\j\i\o\a\v\o\t\n\o\l\f\j\e\n\n\6\d\b\3\8\b\5\0\f\t\i\4\m\6\r\f\7\g\m\b\5\4\a\1\7\5\b\a\w\3\d\g\7\r\a\q\o\s\o\k\o\n\2\p\z\v\7\m\b\t\j\4\5\2\b\6\9\h\e\x\s\0\t\9\x\2 ]] 00:08:23.087 14:24:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:23.087 14:24:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 7vhp8ohx0t306tf0aew6y69e01tjaekgt942zrl1pmoww59b788mps4ysw1gdq52loc2crob5jp5lrkq3p8yzgpf80b0lve7i8vw1qbtsrzqxu84dca89h8p5qx401yrcl4e02st537teygbyriqzqdf3rooe5m2e0tzafppeb55yr2w3il8aqlw6qtjxo4s5t3vin8eig17n60e2z0e5r7vvs9lavb2owge7cm7mbdqwdxfieh6ecy4ijr65tftmb9bkisv2d5xz1rnbozup2p29uiwix0rpm4ti26dmuicvw3jqi67cb2mckn43f8i9w7jd9w797rego1ks39g2oud09m30fd0seyp63pd9o4lb85lkvty1d409idwo7yuhx634mutd8fi523jhoj7j65ii0t1zu4vgl13vex694f85uaj7cjlsc0psjmb3mzecs512lvx8p90lyd707w1zo6vj47isjfymmlg4lwmw9ftyrpyfvs1ymymuc4abj8eo4pcjrgnnpkvg3b9a00vzja88t4rvqmalp7eck7u0ekrmqqrde4cr84ymrhpnw8tkefbhglme81te0b2w2eqrp10m67ve0zc1j3ee88jfvm9i2ehsyvlc2xm32x9wxyiltmgmqhssnf3fc6ys8957k77lo5ptv8a2lqcwn55ubogpwkcesuocz1mqvm16efokek3dkafsijlaamc393snmdsjfb0d1d37n08e9xq1frjybhbsuk5p3kx58gmea2it1uquiz704mr1q6z6dcjstbsiqysq2pww7oizsf1bm7xcmsphc9ps0hfks9prkx0berdvucoqqye5779ww6d4ne0f8vx3xfv6307jz8jd5d9ldggyek0yk3phuw1kcgbat8teuimp5l50tmdcm0mx8h4arhvg5tljd81nhn1rphrmo1jioavotnolfjenn6db38b50fti4m6rf7gmb54a175baw3dg7raqosokon2pzv7mbtj452b69hexs0t9x2 == 
\7\v\h\p\8\o\h\x\0\t\3\0\6\t\f\0\a\e\w\6\y\6\9\e\0\1\t\j\a\e\k\g\t\9\4\2\z\r\l\1\p\m\o\w\w\5\9\b\7\8\8\m\p\s\4\y\s\w\1\g\d\q\5\2\l\o\c\2\c\r\o\b\5\j\p\5\l\r\k\q\3\p\8\y\z\g\p\f\8\0\b\0\l\v\e\7\i\8\v\w\1\q\b\t\s\r\z\q\x\u\8\4\d\c\a\8\9\h\8\p\5\q\x\4\0\1\y\r\c\l\4\e\0\2\s\t\5\3\7\t\e\y\g\b\y\r\i\q\z\q\d\f\3\r\o\o\e\5\m\2\e\0\t\z\a\f\p\p\e\b\5\5\y\r\2\w\3\i\l\8\a\q\l\w\6\q\t\j\x\o\4\s\5\t\3\v\i\n\8\e\i\g\1\7\n\6\0\e\2\z\0\e\5\r\7\v\v\s\9\l\a\v\b\2\o\w\g\e\7\c\m\7\m\b\d\q\w\d\x\f\i\e\h\6\e\c\y\4\i\j\r\6\5\t\f\t\m\b\9\b\k\i\s\v\2\d\5\x\z\1\r\n\b\o\z\u\p\2\p\2\9\u\i\w\i\x\0\r\p\m\4\t\i\2\6\d\m\u\i\c\v\w\3\j\q\i\6\7\c\b\2\m\c\k\n\4\3\f\8\i\9\w\7\j\d\9\w\7\9\7\r\e\g\o\1\k\s\3\9\g\2\o\u\d\0\9\m\3\0\f\d\0\s\e\y\p\6\3\p\d\9\o\4\l\b\8\5\l\k\v\t\y\1\d\4\0\9\i\d\w\o\7\y\u\h\x\6\3\4\m\u\t\d\8\f\i\5\2\3\j\h\o\j\7\j\6\5\i\i\0\t\1\z\u\4\v\g\l\1\3\v\e\x\6\9\4\f\8\5\u\a\j\7\c\j\l\s\c\0\p\s\j\m\b\3\m\z\e\c\s\5\1\2\l\v\x\8\p\9\0\l\y\d\7\0\7\w\1\z\o\6\v\j\4\7\i\s\j\f\y\m\m\l\g\4\l\w\m\w\9\f\t\y\r\p\y\f\v\s\1\y\m\y\m\u\c\4\a\b\j\8\e\o\4\p\c\j\r\g\n\n\p\k\v\g\3\b\9\a\0\0\v\z\j\a\8\8\t\4\r\v\q\m\a\l\p\7\e\c\k\7\u\0\e\k\r\m\q\q\r\d\e\4\c\r\8\4\y\m\r\h\p\n\w\8\t\k\e\f\b\h\g\l\m\e\8\1\t\e\0\b\2\w\2\e\q\r\p\1\0\m\6\7\v\e\0\z\c\1\j\3\e\e\8\8\j\f\v\m\9\i\2\e\h\s\y\v\l\c\2\x\m\3\2\x\9\w\x\y\i\l\t\m\g\m\q\h\s\s\n\f\3\f\c\6\y\s\8\9\5\7\k\7\7\l\o\5\p\t\v\8\a\2\l\q\c\w\n\5\5\u\b\o\g\p\w\k\c\e\s\u\o\c\z\1\m\q\v\m\1\6\e\f\o\k\e\k\3\d\k\a\f\s\i\j\l\a\a\m\c\3\9\3\s\n\m\d\s\j\f\b\0\d\1\d\3\7\n\0\8\e\9\x\q\1\f\r\j\y\b\h\b\s\u\k\5\p\3\k\x\5\8\g\m\e\a\2\i\t\1\u\q\u\i\z\7\0\4\m\r\1\q\6\z\6\d\c\j\s\t\b\s\i\q\y\s\q\2\p\w\w\7\o\i\z\s\f\1\b\m\7\x\c\m\s\p\h\c\9\p\s\0\h\f\k\s\9\p\r\k\x\0\b\e\r\d\v\u\c\o\q\q\y\e\5\7\7\9\w\w\6\d\4\n\e\0\f\8\v\x\3\x\f\v\6\3\0\7\j\z\8\j\d\5\d\9\l\d\g\g\y\e\k\0\y\k\3\p\h\u\w\1\k\c\g\b\a\t\8\t\e\u\i\m\p\5\l\5\0\t\m\d\c\m\0\m\x\8\h\4\a\r\h\v\g\5\t\l\j\d\8\1\n\h\n\1\r\p\h\r\m\o\1\j\i\o\a\v\o\t\n\o\l\f\j\e\n\n\6\d\b\3\8\b\5\0\f\t\i\4\m\6\r\f\7\g\m\b\5\4\a\1\7\5\b\a\w\3\d\g\7\r\a\q\o\s\o\k\o\n\2\p\z\v\7\m\b\t\j\4\5\2\b\6\9\h\e\x\s\0\t\9\x\2 ]] 00:08:23.087 14:24:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:23.659 14:24:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:23.659 14:24:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:23.659 14:24:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:23.659 14:24:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:23.659 [2024-12-16 14:24:15.610970] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:23.659 [2024-12-16 14:24:15.611081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74999 ] 00:08:23.659 { 00:08:23.659 "subsystems": [ 00:08:23.659 { 00:08:23.659 "subsystem": "bdev", 00:08:23.659 "config": [ 00:08:23.659 { 00:08:23.659 "params": { 00:08:23.659 "block_size": 512, 00:08:23.659 "num_blocks": 1048576, 00:08:23.659 "name": "malloc0" 00:08:23.659 }, 00:08:23.659 "method": "bdev_malloc_create" 00:08:23.659 }, 00:08:23.659 { 00:08:23.659 "params": { 00:08:23.659 "filename": "/dev/zram1", 00:08:23.659 "name": "uring0" 00:08:23.659 }, 00:08:23.659 "method": "bdev_uring_create" 00:08:23.659 }, 00:08:23.659 { 00:08:23.659 "method": "bdev_wait_for_examine" 00:08:23.659 } 00:08:23.659 ] 00:08:23.659 } 00:08:23.659 ] 00:08:23.659 } 00:08:23.659 [2024-12-16 14:24:15.750154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.659 [2024-12-16 14:24:15.768552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.659 [2024-12-16 14:24:15.795853] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.037  [2024-12-16T14:24:18.173Z] Copying: 173/512 [MB] (173 MBps) [2024-12-16T14:24:19.109Z] Copying: 327/512 [MB] (153 MBps) [2024-12-16T14:24:19.368Z] Copying: 478/512 [MB] (150 MBps) [2024-12-16T14:24:19.627Z] Copying: 512/512 [MB] (average 157 MBps) 00:08:27.427 00:08:27.427 14:24:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:27.427 14:24:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:27.427 14:24:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:27.427 14:24:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:27.427 14:24:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:27.427 14:24:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:27.427 14:24:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:27.427 14:24:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:27.427 [2024-12-16 14:24:19.460592] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:27.427 [2024-12-16 14:24:19.460681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75049 ] 00:08:27.427 { 00:08:27.427 "subsystems": [ 00:08:27.427 { 00:08:27.427 "subsystem": "bdev", 00:08:27.427 "config": [ 00:08:27.427 { 00:08:27.427 "params": { 00:08:27.427 "block_size": 512, 00:08:27.427 "num_blocks": 1048576, 00:08:27.427 "name": "malloc0" 00:08:27.427 }, 00:08:27.427 "method": "bdev_malloc_create" 00:08:27.427 }, 00:08:27.427 { 00:08:27.427 "params": { 00:08:27.427 "filename": "/dev/zram1", 00:08:27.427 "name": "uring0" 00:08:27.427 }, 00:08:27.427 "method": "bdev_uring_create" 00:08:27.427 }, 00:08:27.427 { 00:08:27.427 "params": { 00:08:27.427 "name": "uring0" 00:08:27.427 }, 00:08:27.427 "method": "bdev_uring_delete" 00:08:27.427 }, 00:08:27.427 { 00:08:27.427 "method": "bdev_wait_for_examine" 00:08:27.427 } 00:08:27.427 ] 00:08:27.427 } 00:08:27.427 ] 00:08:27.427 } 00:08:27.427 [2024-12-16 14:24:19.605807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.687 [2024-12-16 14:24:19.629350] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.687 [2024-12-16 14:24:19.662200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.687  [2024-12-16T14:24:20.145Z] Copying: 0/0 [B] (average 0 Bps) 00:08:27.945 00:08:27.945 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:27.945 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:08:27.945 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:27.945 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.945 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:27.945 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:27.945 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:27.945 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:27.945 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.945 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.945 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.945 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.945 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.946 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.946 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.946 14:24:20 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:27.946 [2024-12-16 14:24:20.080307] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:27.946 [2024-12-16 14:24:20.080415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75073 ] 00:08:27.946 { 00:08:27.946 "subsystems": [ 00:08:27.946 { 00:08:27.946 "subsystem": "bdev", 00:08:27.946 "config": [ 00:08:27.946 { 00:08:27.946 "params": { 00:08:27.946 "block_size": 512, 00:08:27.946 "num_blocks": 1048576, 00:08:27.946 "name": "malloc0" 00:08:27.946 }, 00:08:27.946 "method": "bdev_malloc_create" 00:08:27.946 }, 00:08:27.946 { 00:08:27.946 "params": { 00:08:27.946 "filename": "/dev/zram1", 00:08:27.946 "name": "uring0" 00:08:27.946 }, 00:08:27.946 "method": "bdev_uring_create" 00:08:27.946 }, 00:08:27.946 { 00:08:27.946 "params": { 00:08:27.946 "name": "uring0" 00:08:27.946 }, 00:08:27.946 "method": "bdev_uring_delete" 00:08:27.946 }, 00:08:27.946 { 00:08:27.946 "method": "bdev_wait_for_examine" 00:08:27.946 } 00:08:27.946 ] 00:08:27.946 } 00:08:27.946 ] 00:08:27.946 } 00:08:28.204 [2024-12-16 14:24:20.229153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.204 [2024-12-16 14:24:20.250993] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.204 [2024-12-16 14:24:20.284060] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.463 [2024-12-16 14:24:20.414899] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:28.463 [2024-12-16 14:24:20.414977] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:28.463 [2024-12-16 14:24:20.414989] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:08:28.463 [2024-12-16 14:24:20.414999] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.463 [2024-12-16 14:24:20.604756] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:28.722 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:08:28.722 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:28.722 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:08:28.722 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:08:28.722 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:08:28.722 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:28.722 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:28.722 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:28.722 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:28.722 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:28.722 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:28.722 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:28.722 00:08:28.722 real 0m12.794s 00:08:28.722 user 0m8.716s 00:08:28.722 sys 0m10.852s 00:08:28.722 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.722 14:24:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:28.722 ************************************ 00:08:28.722 END TEST dd_uring_copy 00:08:28.722 ************************************ 00:08:28.981 ************************************ 00:08:28.981 END TEST spdk_dd_uring 00:08:28.981 ************************************ 00:08:28.981 00:08:28.981 real 0m13.082s 00:08:28.981 user 0m8.888s 00:08:28.981 sys 0m10.971s 00:08:28.981 14:24:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.981 14:24:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:28.981 14:24:20 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:28.981 14:24:20 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.981 14:24:20 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.981 14:24:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:28.981 ************************************ 00:08:28.981 START TEST spdk_dd_sparse 00:08:28.981 ************************************ 00:08:28.981 14:24:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:28.981 * Looking for test storage... 00:08:28.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:28.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.981 --rc genhtml_branch_coverage=1 00:08:28.981 --rc genhtml_function_coverage=1 00:08:28.981 --rc genhtml_legend=1 00:08:28.981 --rc geninfo_all_blocks=1 00:08:28.981 --rc geninfo_unexecuted_blocks=1 00:08:28.981 00:08:28.981 ' 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:28.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.981 --rc genhtml_branch_coverage=1 00:08:28.981 --rc genhtml_function_coverage=1 00:08:28.981 --rc genhtml_legend=1 00:08:28.981 --rc geninfo_all_blocks=1 00:08:28.981 --rc geninfo_unexecuted_blocks=1 00:08:28.981 00:08:28.981 ' 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:28.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.981 --rc genhtml_branch_coverage=1 00:08:28.981 --rc genhtml_function_coverage=1 00:08:28.981 --rc genhtml_legend=1 00:08:28.981 --rc geninfo_all_blocks=1 00:08:28.981 --rc geninfo_unexecuted_blocks=1 00:08:28.981 00:08:28.981 ' 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:28.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.981 --rc genhtml_branch_coverage=1 00:08:28.981 --rc genhtml_function_coverage=1 00:08:28.981 --rc genhtml_legend=1 00:08:28.981 --rc geninfo_all_blocks=1 00:08:28.981 --rc geninfo_unexecuted_blocks=1 00:08:28.981 00:08:28.981 ' 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.981 14:24:21 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.981 14:24:21 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:28.982 14:24:21 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.982 14:24:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:28.982 14:24:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:28.982 14:24:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:28.982 14:24:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:28.982 14:24:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:28.982 14:24:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:28.982 14:24:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:28.982 14:24:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:28.982 14:24:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:28.982 14:24:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:28.982 14:24:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:29.241 1+0 records in 00:08:29.241 1+0 records out 00:08:29.241 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00671525 s, 625 MB/s 00:08:29.241 14:24:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:29.241 1+0 records in 00:08:29.241 1+0 records out 00:08:29.241 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0065872 s, 637 MB/s 00:08:29.241 14:24:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:29.241 1+0 records in 00:08:29.241 1+0 records out 00:08:29.241 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00381897 s, 1.1 GB/s 00:08:29.241 14:24:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:29.241 14:24:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.241 14:24:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.241 14:24:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:29.241 ************************************ 00:08:29.241 START TEST dd_sparse_file_to_file 00:08:29.241 ************************************ 00:08:29.241 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:08:29.241 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:29.241 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:29.241 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:29.241 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:29.241 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:29.241 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:29.241 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:29.241 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:29.241 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:29.241 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:29.241 { 00:08:29.241 "subsystems": [ 00:08:29.241 { 00:08:29.241 "subsystem": "bdev", 00:08:29.241 "config": [ 00:08:29.241 { 00:08:29.241 "params": { 00:08:29.241 "block_size": 4096, 00:08:29.241 "filename": "dd_sparse_aio_disk", 00:08:29.241 "name": "dd_aio" 00:08:29.241 }, 00:08:29.241 "method": "bdev_aio_create" 00:08:29.241 }, 00:08:29.241 { 00:08:29.241 "params": { 00:08:29.241 "lvs_name": "dd_lvstore", 00:08:29.241 "bdev_name": "dd_aio" 00:08:29.241 }, 00:08:29.241 "method": "bdev_lvol_create_lvstore" 00:08:29.241 }, 00:08:29.241 { 00:08:29.241 "method": "bdev_wait_for_examine" 00:08:29.241 } 00:08:29.241 ] 00:08:29.241 } 00:08:29.241 ] 00:08:29.241 } 00:08:29.241 [2024-12-16 14:24:21.275206] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
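The prepare step above builds a 100 MB AIO backing file and writes three 4 MiB extents into file_zero1 with holes between them; dd_sparse_file_to_file then copies that file through an AIO bdev plus lvstore and afterwards compares apparent size (stat %s) and allocated blocks (stat %b). A minimal sketch of the same flow run by hand, with every command and flag taken from the log and assuming the bdev config is saved to an ordinary file named bdev.json instead of being fed over /dev/fd/62 as the harness does:

# hypothetical standalone re-run of the dd_sparse_file_to_file flow (paths/flags from the log; bdev.json is an assumed file name)
truncate --size 104857600 dd_sparse_aio_disk            # 100 MB backing file for bdev_aio_create
dd if=/dev/zero of=file_zero1 bs=4M count=1             # data extent at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4      # data extent at 16 MiB, hole before it
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8      # data extent at 32 MiB, hole before it
cat > bdev.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_aio_create","params":{"filename":"dd_sparse_aio_disk","name":"dd_aio","block_size":4096}},
  {"method":"bdev_lvol_create_lvstore","params":{"bdev_name":"dd_aio","lvs_name":"dd_lvstore"}},
  {"method":"bdev_wait_for_examine"}]}]}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json bdev.json
stat --printf='%s %b\n' file_zero1 file_zero2           # apparent sizes should match; --sparse keeps allocated blocks low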
00:08:29.241 [2024-12-16 14:24:21.275782] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75171 ] 00:08:29.241 [2024-12-16 14:24:21.427315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.529 [2024-12-16 14:24:21.455866] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.529 [2024-12-16 14:24:21.495646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.529  [2024-12-16T14:24:21.729Z] Copying: 12/36 [MB] (average 1200 MBps) 00:08:29.529 00:08:29.529 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:29.529 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:29.530 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:29.530 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:29.795 ************************************ 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:29.795 00:08:29.795 real 0m0.522s 00:08:29.795 user 0m0.290s 00:08:29.795 sys 0m0.270s 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:29.795 END TEST dd_sparse_file_to_file 00:08:29.795 ************************************ 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:29.795 ************************************ 00:08:29.795 START TEST dd_sparse_file_to_bdev 00:08:29.795 ************************************ 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:29.795 14:24:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:29.795 [2024-12-16 14:24:21.855859] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:29.795 [2024-12-16 14:24:21.855957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75215 ] 00:08:29.795 { 00:08:29.795 "subsystems": [ 00:08:29.795 { 00:08:29.795 "subsystem": "bdev", 00:08:29.795 "config": [ 00:08:29.795 { 00:08:29.795 "params": { 00:08:29.795 "block_size": 4096, 00:08:29.795 "filename": "dd_sparse_aio_disk", 00:08:29.795 "name": "dd_aio" 00:08:29.795 }, 00:08:29.795 "method": "bdev_aio_create" 00:08:29.795 }, 00:08:29.795 { 00:08:29.795 "params": { 00:08:29.795 "lvs_name": "dd_lvstore", 00:08:29.795 "lvol_name": "dd_lvol", 00:08:29.795 "size_in_mib": 36, 00:08:29.795 "thin_provision": true 00:08:29.795 }, 00:08:29.795 "method": "bdev_lvol_create" 00:08:29.795 }, 00:08:29.795 { 00:08:29.795 "method": "bdev_wait_for_examine" 00:08:29.795 } 00:08:29.795 ] 00:08:29.795 } 00:08:29.795 ] 00:08:29.795 } 00:08:30.054 [2024-12-16 14:24:22.006005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.054 [2024-12-16 14:24:22.035397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.054 [2024-12-16 14:24:22.075878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.054  [2024-12-16T14:24:22.513Z] Copying: 12/36 [MB] (average 545 MBps) 00:08:30.313 00:08:30.313 00:08:30.313 real 0m0.515s 00:08:30.313 user 0m0.332s 00:08:30.313 sys 0m0.270s 00:08:30.313 ************************************ 00:08:30.313 END TEST dd_sparse_file_to_bdev 00:08:30.313 ************************************ 00:08:30.314 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.314 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:30.314 14:24:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:30.314 14:24:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.314 14:24:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.314 14:24:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:30.314 ************************************ 00:08:30.314 START TEST dd_sparse_bdev_to_file 00:08:30.314 ************************************ 00:08:30.314 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:08:30.314 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
00:08:30.314 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:30.314 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:30.314 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:30.314 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:30.314 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:30.314 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:30.314 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:30.314 { 00:08:30.314 "subsystems": [ 00:08:30.314 { 00:08:30.314 "subsystem": "bdev", 00:08:30.314 "config": [ 00:08:30.314 { 00:08:30.314 "params": { 00:08:30.314 "block_size": 4096, 00:08:30.314 "filename": "dd_sparse_aio_disk", 00:08:30.314 "name": "dd_aio" 00:08:30.314 }, 00:08:30.314 "method": "bdev_aio_create" 00:08:30.314 }, 00:08:30.314 { 00:08:30.314 "method": "bdev_wait_for_examine" 00:08:30.314 } 00:08:30.314 ] 00:08:30.314 } 00:08:30.314 ] 00:08:30.314 } 00:08:30.314 [2024-12-16 14:24:22.411931] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:30.314 [2024-12-16 14:24:22.412011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75247 ] 00:08:30.573 [2024-12-16 14:24:22.559372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.573 [2024-12-16 14:24:22.583136] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.573 [2024-12-16 14:24:22.617025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.573  [2024-12-16T14:24:23.031Z] Copying: 12/36 [MB] (average 1090 MBps) 00:08:30.832 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:30.832 00:08:30.832 real 0m0.486s 00:08:30.832 user 0m0.285s 
00:08:30.832 sys 0m0.251s 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:30.832 ************************************ 00:08:30.832 END TEST dd_sparse_bdev_to_file 00:08:30.832 ************************************ 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:30.832 00:08:30.832 real 0m1.920s 00:08:30.832 user 0m1.091s 00:08:30.832 sys 0m0.999s 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.832 14:24:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:30.832 ************************************ 00:08:30.832 END TEST spdk_dd_sparse 00:08:30.832 ************************************ 00:08:30.832 14:24:22 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:30.832 14:24:22 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.832 14:24:22 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.832 14:24:22 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:30.832 ************************************ 00:08:30.832 START TEST spdk_dd_negative 00:08:30.832 ************************************ 00:08:30.832 14:24:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:30.832 * Looking for test storage... 
00:08:31.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:31.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.092 --rc genhtml_branch_coverage=1 00:08:31.092 --rc genhtml_function_coverage=1 00:08:31.092 --rc genhtml_legend=1 00:08:31.092 --rc geninfo_all_blocks=1 00:08:31.092 --rc geninfo_unexecuted_blocks=1 00:08:31.092 00:08:31.092 ' 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:31.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.092 --rc genhtml_branch_coverage=1 00:08:31.092 --rc genhtml_function_coverage=1 00:08:31.092 --rc genhtml_legend=1 00:08:31.092 --rc geninfo_all_blocks=1 00:08:31.092 --rc geninfo_unexecuted_blocks=1 00:08:31.092 00:08:31.092 ' 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:31.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.092 --rc genhtml_branch_coverage=1 00:08:31.092 --rc genhtml_function_coverage=1 00:08:31.092 --rc genhtml_legend=1 00:08:31.092 --rc geninfo_all_blocks=1 00:08:31.092 --rc geninfo_unexecuted_blocks=1 00:08:31.092 00:08:31.092 ' 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:31.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.092 --rc genhtml_branch_coverage=1 00:08:31.092 --rc genhtml_function_coverage=1 00:08:31.092 --rc genhtml_legend=1 00:08:31.092 --rc geninfo_all_blocks=1 00:08:31.092 --rc geninfo_unexecuted_blocks=1 00:08:31.092 00:08:31.092 ' 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.092 ************************************ 00:08:31.092 START TEST 
dd_invalid_arguments 00:08:31.092 ************************************ 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.092 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:31.092 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:31.092 00:08:31.092 CPU options: 00:08:31.092 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:31.092 (like [0,1,10]) 00:08:31.092 --lcores lcore to CPU mapping list. The list is in the format: 00:08:31.092 [<,lcores[@CPUs]>...] 00:08:31.092 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:31.092 Within the group, '-' is used for range separator, 00:08:31.092 ',' is used for single number separator. 00:08:31.092 '( )' can be omitted for single element group, 00:08:31.092 '@' can be omitted if cpus and lcores have the same value 00:08:31.092 --disable-cpumask-locks Disable CPU core lock files. 00:08:31.092 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:31.092 pollers in the app support interrupt mode) 00:08:31.092 -p, --main-core main (primary) core for DPDK 00:08:31.092 00:08:31.092 Configuration options: 00:08:31.093 -c, --config, --json JSON config file 00:08:31.093 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:31.093 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:31.093 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:31.093 --rpcs-allowed comma-separated list of permitted RPCS 00:08:31.093 --json-ignore-init-errors don't exit on invalid config entry 00:08:31.093 00:08:31.093 Memory options: 00:08:31.093 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:31.093 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:31.093 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:31.093 -R, --huge-unlink unlink huge files after initialization 00:08:31.093 -n, --mem-channels number of memory channels used for DPDK 00:08:31.093 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:31.093 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:31.093 --no-huge run without using hugepages 00:08:31.093 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:31.093 -i, --shm-id shared memory ID (optional) 00:08:31.093 -g, --single-file-segments force creating just one hugetlbfs file 00:08:31.093 00:08:31.093 PCI options: 00:08:31.093 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:31.093 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:31.093 -u, --no-pci disable PCI access 00:08:31.093 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:31.093 00:08:31.093 Log options: 00:08:31.093 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:31.093 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:31.093 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:31.093 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:31.093 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:08:31.093 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:08:31.093 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:08:31.093 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:08:31.093 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:08:31.093 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:08:31.093 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:31.093 --silence-noticelog disable notice level logging to stderr 00:08:31.093 00:08:31.093 Trace options: 00:08:31.093 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:31.093 setting 0 to disable trace (default 32768) 00:08:31.093 Tracepoints vary in size and can use more than one trace entry. 00:08:31.093 -e, --tpoint-group [:] 00:08:31.093 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:31.093 [2024-12-16 14:24:23.266617] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:08:31.093 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:31.093 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:08:31.093 bdev_raid, scheduler, all). 00:08:31.093 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:31.093 a tracepoint group. First tpoint inside a group can be enabled by 00:08:31.093 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:31.093 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:31.093 in /include/spdk_internal/trace_defs.h 00:08:31.093 00:08:31.093 Other options: 00:08:31.093 -h, --help show this usage 00:08:31.093 -v, --version print SPDK version 00:08:31.093 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:31.093 --env-context Opaque context for use of the env implementation 00:08:31.093 00:08:31.093 Application specific: 00:08:31.093 [--------- DD Options ---------] 00:08:31.093 --if Input file. Must specify either --if or --ib. 00:08:31.093 --ib Input bdev. Must specifier either --if or --ib 00:08:31.093 --of Output file. Must specify either --of or --ob. 00:08:31.093 --ob Output bdev. Must specify either --of or --ob. 00:08:31.093 --iflag Input file flags. 00:08:31.093 --oflag Output file flags. 00:08:31.093 --bs I/O unit size (default: 4096) 00:08:31.093 --qd Queue depth (default: 2) 00:08:31.093 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:31.093 --skip Skip this many I/O units at start of input. (default: 0) 00:08:31.093 --seek Skip this many I/O units at start of output. (default: 0) 00:08:31.093 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:31.093 --sparse Enable hole skipping in input target 00:08:31.093 Available iflag and oflag values: 00:08:31.093 append - append mode 00:08:31.093 direct - use direct I/O for data 00:08:31.093 directory - fail unless a directory 00:08:31.093 dsync - use synchronized I/O for data 00:08:31.093 noatime - do not update access time 00:08:31.093 noctty - do not assign controlling terminal from file 00:08:31.093 nofollow - do not follow symlinks 00:08:31.093 nonblock - use non-blocking I/O 00:08:31.093 sync - use synchronized I/O for data and metadata 00:08:31.093 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:08:31.093 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.093 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:31.093 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.093 00:08:31.093 real 0m0.078s 00:08:31.093 user 0m0.053s 00:08:31.093 sys 0m0.024s 00:08:31.093 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.093 14:24:23 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:31.093 ************************************ 00:08:31.093 END TEST dd_invalid_arguments 00:08:31.093 ************************************ 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.352 ************************************ 00:08:31.352 START TEST dd_double_input 00:08:31.352 ************************************ 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:31.352 [2024-12-16 14:24:23.392093] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
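As the usage text above spells out, spdk_dd takes exactly one input (--if file or --ib bdev) and one output (--of file or --ob bdev); dd_double_input passes both input flags at once and only passes if it gets the "You may specify either --if or --ib, but not both." rejection logged here. A minimal sketch of that assertion, with the binary and dump-file paths taken from the log and the harness's NOT helper rendered as a plain exit-status check:

# supplying two input targets must make spdk_dd exit non-zero before any copy starts
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
     --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=; then
  echo "spdk_dd unexpectedly accepted both --if and --ib" >&2
  exit 1
fi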
00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:31.352 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.352 00:08:31.353 real 0m0.073s 00:08:31.353 user 0m0.046s 00:08:31.353 sys 0m0.026s 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:31.353 ************************************ 00:08:31.353 END TEST dd_double_input 00:08:31.353 ************************************ 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.353 ************************************ 00:08:31.353 START TEST dd_double_output 00:08:31.353 ************************************ 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:31.353 [2024-12-16 14:24:23.518091] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.353 00:08:31.353 real 0m0.076s 00:08:31.353 user 0m0.044s 00:08:31.353 sys 0m0.031s 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.353 14:24:23 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:31.353 ************************************ 00:08:31.353 END TEST dd_double_output 00:08:31.353 ************************************ 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.611 ************************************ 00:08:31.611 START TEST dd_no_input 00:08:31.611 ************************************ 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:31.611 [2024-12-16 14:24:23.637302] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.611 00:08:31.611 real 0m0.068s 00:08:31.611 user 0m0.044s 00:08:31.611 sys 0m0.023s 00:08:31.611 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:31.612 ************************************ 00:08:31.612 END TEST dd_no_input 00:08:31.612 ************************************ 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.612 ************************************ 00:08:31.612 START TEST dd_no_output 00:08:31.612 ************************************ 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:31.612 [2024-12-16 14:24:23.754384] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:08:31.612 14:24:23 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.612 00:08:31.612 real 0m0.072s 00:08:31.612 user 0m0.051s 00:08:31.612 sys 0m0.019s 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.612 14:24:23 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:31.612 ************************************ 00:08:31.612 END TEST dd_no_output 00:08:31.612 ************************************ 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.871 ************************************ 00:08:31.871 START TEST dd_wrong_blocksize 00:08:31.871 ************************************ 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:31.871 [2024-12-16 14:24:23.876259] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.871 00:08:31.871 real 0m0.073s 00:08:31.871 user 0m0.045s 00:08:31.871 sys 0m0.028s 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:31.871 ************************************ 00:08:31.871 END TEST dd_wrong_blocksize 00:08:31.871 ************************************ 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.871 ************************************ 00:08:31.871 START TEST dd_smaller_blocksize 00:08:31.871 ************************************ 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.871 
14:24:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.871 14:24:23 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:31.871 [2024-12-16 14:24:24.004174] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:31.871 [2024-12-16 14:24:24.004710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75474 ] 00:08:32.130 [2024-12-16 14:24:24.155549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.130 [2024-12-16 14:24:24.181636] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.130 [2024-12-16 14:24:24.219086] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.130 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:32.130 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:32.130 [2024-12-16 14:24:24.240809] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:32.130 [2024-12-16 14:24:24.240841] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.130 [2024-12-16 14:24:24.320227] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:32.389 14:24:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:08:32.389 14:24:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.389 14:24:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:08:32.389 14:24:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:08:32.389 14:24:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:08:32.389 14:24:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.389 00:08:32.389 real 0m0.442s 00:08:32.389 user 0m0.232s 00:08:32.389 sys 0m0.105s 00:08:32.389 14:24:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.389 14:24:24 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:32.389 ************************************ 00:08:32.389 END TEST dd_smaller_blocksize 00:08:32.389 ************************************ 00:08:32.389 14:24:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:32.389 14:24:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.389 14:24:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.390 ************************************ 00:08:32.390 START TEST dd_invalid_count 00:08:32.390 ************************************ 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:32.390 [2024-12-16 14:24:24.489412] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.390 00:08:32.390 real 0m0.073s 00:08:32.390 user 0m0.051s 00:08:32.390 sys 0m0.021s 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.390 ************************************ 00:08:32.390 END TEST dd_invalid_count 00:08:32.390 ************************************ 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.390 ************************************ 
00:08:32.390 START TEST dd_invalid_oflag 00:08:32.390 ************************************ 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.390 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:32.649 [2024-12-16 14:24:24.619161] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.649 00:08:32.649 real 0m0.074s 00:08:32.649 user 0m0.040s 00:08:32.649 sys 0m0.034s 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:32.649 ************************************ 00:08:32.649 END TEST dd_invalid_oflag 00:08:32.649 ************************************ 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.649 ************************************ 00:08:32.649 START TEST dd_invalid_iflag 00:08:32.649 
************************************ 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.649 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:32.650 [2024-12-16 14:24:24.747961] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.650 00:08:32.650 real 0m0.085s 00:08:32.650 user 0m0.055s 00:08:32.650 sys 0m0.030s 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:32.650 ************************************ 00:08:32.650 END TEST dd_invalid_iflag 00:08:32.650 ************************************ 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.650 ************************************ 00:08:32.650 START TEST dd_unknown_flag 00:08:32.650 ************************************ 00:08:32.650 
14:24:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.650 14:24:24 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:32.909 [2024-12-16 14:24:24.878589] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:32.909 [2024-12-16 14:24:24.878685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75566 ] 00:08:32.909 [2024-12-16 14:24:25.027147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.909 [2024-12-16 14:24:25.055709] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.909 [2024-12-16 14:24:25.096266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.168 [2024-12-16 14:24:25.116879] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:08:33.168 [2024-12-16 14:24:25.116948] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.168 [2024-12-16 14:24:25.116999] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:08:33.168 [2024-12-16 14:24:25.117011] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.168 [2024-12-16 14:24:25.117302] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:33.168 [2024-12-16 14:24:25.117320] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.168 [2024-12-16 14:24:25.117368] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:33.168 [2024-12-16 14:24:25.117379] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:33.168 [2024-12-16 14:24:25.190280] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.168 00:08:33.168 real 0m0.434s 00:08:33.168 user 0m0.222s 00:08:33.168 sys 0m0.119s 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:33.168 ************************************ 00:08:33.168 END TEST dd_unknown_flag 00:08:33.168 ************************************ 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.168 ************************************ 00:08:33.168 START TEST dd_invalid_json 00:08:33.168 ************************************ 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.168 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:33.427 [2024-12-16 14:24:25.366784] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:33.427 [2024-12-16 14:24:25.366881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75589 ] 00:08:33.427 [2024-12-16 14:24:25.515639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.427 [2024-12-16 14:24:25.541671] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.427 [2024-12-16 14:24:25.541780] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:33.427 [2024-12-16 14:24:25.541795] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:33.427 [2024-12-16 14:24:25.541803] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.427 [2024-12-16 14:24:25.541856] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:33.427 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:08:33.427 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.427 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:08:33.427 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:08:33.427 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:08:33.427 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.427 00:08:33.427 real 0m0.286s 00:08:33.427 user 0m0.128s 00:08:33.427 sys 0m0.058s 00:08:33.427 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.427 ************************************ 00:08:33.427 END TEST dd_invalid_json 00:08:33.427 ************************************ 00:08:33.427 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.686 ************************************ 00:08:33.686 START TEST dd_invalid_seek 00:08:33.686 ************************************ 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:33.686 
14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.686 14:24:25 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:33.686 [2024-12-16 14:24:25.707544] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:33.686 [2024-12-16 14:24:25.707634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75618 ] 00:08:33.686 { 00:08:33.686 "subsystems": [ 00:08:33.686 { 00:08:33.686 "subsystem": "bdev", 00:08:33.686 "config": [ 00:08:33.686 { 00:08:33.686 "params": { 00:08:33.686 "block_size": 512, 00:08:33.686 "num_blocks": 512, 00:08:33.686 "name": "malloc0" 00:08:33.686 }, 00:08:33.686 "method": "bdev_malloc_create" 00:08:33.686 }, 00:08:33.686 { 00:08:33.686 "params": { 00:08:33.686 "block_size": 512, 00:08:33.686 "num_blocks": 512, 00:08:33.686 "name": "malloc1" 00:08:33.686 }, 00:08:33.686 "method": "bdev_malloc_create" 00:08:33.686 }, 00:08:33.686 { 00:08:33.686 "method": "bdev_wait_for_examine" 00:08:33.686 } 00:08:33.686 ] 00:08:33.686 } 00:08:33.686 ] 00:08:33.686 } 00:08:33.686 [2024-12-16 14:24:25.856407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.946 [2024-12-16 14:24:25.888227] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.946 [2024-12-16 14:24:25.931061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.946 [2024-12-16 14:24:25.979074] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:33.946 [2024-12-16 14:24:25.979139] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.946 [2024-12-16 14:24:26.052591] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:33.946 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:08:33.946 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.946 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:08:33.946 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:08:33.946 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:08:33.946 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.946 00:08:33.946 real 0m0.462s 00:08:33.946 user 0m0.291s 00:08:33.946 sys 0m0.134s 00:08:33.946 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.946 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:33.946 ************************************ 00:08:33.946 END TEST dd_invalid_seek 00:08:33.946 ************************************ 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:34.206 ************************************ 00:08:34.206 START TEST dd_invalid_skip 00:08:34.206 ************************************ 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.206 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:34.206 [2024-12-16 14:24:26.230223] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:34.206 [2024-12-16 14:24:26.230319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75652 ] 00:08:34.206 { 00:08:34.206 "subsystems": [ 00:08:34.206 { 00:08:34.206 "subsystem": "bdev", 00:08:34.206 "config": [ 00:08:34.206 { 00:08:34.206 "params": { 00:08:34.206 "block_size": 512, 00:08:34.206 "num_blocks": 512, 00:08:34.206 "name": "malloc0" 00:08:34.206 }, 00:08:34.206 "method": "bdev_malloc_create" 00:08:34.206 }, 00:08:34.206 { 00:08:34.206 "params": { 00:08:34.206 "block_size": 512, 00:08:34.206 "num_blocks": 512, 00:08:34.206 "name": "malloc1" 00:08:34.206 }, 00:08:34.206 "method": "bdev_malloc_create" 00:08:34.206 }, 00:08:34.206 { 00:08:34.206 "method": "bdev_wait_for_examine" 00:08:34.206 } 00:08:34.206 ] 00:08:34.206 } 00:08:34.206 ] 00:08:34.206 } 00:08:34.206 [2024-12-16 14:24:26.379942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.465 [2024-12-16 14:24:26.410605] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.465 [2024-12-16 14:24:26.452964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.465 [2024-12-16 14:24:26.501895] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:34.465 [2024-12-16 14:24:26.501962] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.465 [2024-12-16 14:24:26.579801] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:34.465 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:08:34.465 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.465 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:08:34.465 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:08:34.465 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:08:34.465 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.465 00:08:34.465 real 0m0.478s 00:08:34.465 user 0m0.301s 00:08:34.465 sys 0m0.141s 00:08:34.465 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.465 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:34.465 ************************************ 00:08:34.465 END TEST dd_invalid_skip 00:08:34.465 ************************************ 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:34.725 ************************************ 00:08:34.725 START TEST dd_invalid_input_count 00:08:34.725 ************************************ 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:08:34.725 14:24:26 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.725 14:24:26 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:34.725 [2024-12-16 14:24:26.755962] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:34.725 [2024-12-16 14:24:26.756821] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75685 ] 00:08:34.725 { 00:08:34.725 "subsystems": [ 00:08:34.725 { 00:08:34.725 "subsystem": "bdev", 00:08:34.725 "config": [ 00:08:34.725 { 00:08:34.725 "params": { 00:08:34.725 "block_size": 512, 00:08:34.725 "num_blocks": 512, 00:08:34.725 "name": "malloc0" 00:08:34.725 }, 00:08:34.725 "method": "bdev_malloc_create" 00:08:34.725 }, 00:08:34.725 { 00:08:34.725 "params": { 00:08:34.725 "block_size": 512, 00:08:34.725 "num_blocks": 512, 00:08:34.725 "name": "malloc1" 00:08:34.725 }, 00:08:34.725 "method": "bdev_malloc_create" 00:08:34.725 }, 00:08:34.725 { 00:08:34.725 "method": "bdev_wait_for_examine" 00:08:34.725 } 00:08:34.725 ] 00:08:34.725 } 00:08:34.726 ] 00:08:34.726 } 00:08:34.726 [2024-12-16 14:24:26.907105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.985 [2024-12-16 14:24:26.940247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.985 [2024-12-16 14:24:26.983995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.985 [2024-12-16 14:24:27.033181] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:34.985 [2024-12-16 14:24:27.033274] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.985 [2024-12-16 14:24:27.106636] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:34.985 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:08:34.985 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.985 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:08:34.985 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:34.985 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:08:34.985 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.985 00:08:34.985 real 0m0.462s 00:08:34.985 user 0m0.290s 00:08:34.985 sys 0m0.135s 00:08:34.985 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.985 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:34.985 ************************************ 00:08:34.985 END TEST dd_invalid_input_count 00:08:34.985 ************************************ 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:35.244 ************************************ 00:08:35.244 START TEST dd_invalid_output_count 00:08:35.244 ************************************ 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.244 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:35.244 [2024-12-16 14:24:27.272825] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:35.245 [2024-12-16 14:24:27.272942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75719 ] 00:08:35.245 { 00:08:35.245 "subsystems": [ 00:08:35.245 { 00:08:35.245 "subsystem": "bdev", 00:08:35.245 "config": [ 00:08:35.245 { 00:08:35.245 "params": { 00:08:35.245 "block_size": 512, 00:08:35.245 "num_blocks": 512, 00:08:35.245 "name": "malloc0" 00:08:35.245 }, 00:08:35.245 "method": "bdev_malloc_create" 00:08:35.245 }, 00:08:35.245 { 00:08:35.245 "method": "bdev_wait_for_examine" 00:08:35.245 } 00:08:35.245 ] 00:08:35.245 } 00:08:35.245 ] 00:08:35.245 } 00:08:35.245 [2024-12-16 14:24:27.423647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.502 [2024-12-16 14:24:27.449417] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.502 [2024-12-16 14:24:27.486655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.502 [2024-12-16 14:24:27.525748] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:35.502 [2024-12-16 14:24:27.525825] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.502 [2024-12-16 14:24:27.604371] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:35.502 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:08:35.502 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:35.502 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:08:35.502 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:35.502 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:08:35.502 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:35.502 00:08:35.502 real 0m0.446s 00:08:35.502 user 0m0.289s 00:08:35.502 sys 0m0.113s 00:08:35.502 ************************************ 00:08:35.502 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.502 14:24:27 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:35.502 END TEST dd_invalid_output_count 00:08:35.502 ************************************ 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:35.761 ************************************ 00:08:35.761 START TEST dd_bs_not_multiple 00:08:35.761 ************************************ 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:35.761 14:24:27 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.761 14:24:27 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:35.761 [2024-12-16 14:24:27.774311] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:35.761 [2024-12-16 14:24:27.774401] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75750 ] 00:08:35.761 { 00:08:35.761 "subsystems": [ 00:08:35.761 { 00:08:35.761 "subsystem": "bdev", 00:08:35.761 "config": [ 00:08:35.761 { 00:08:35.761 "params": { 00:08:35.761 "block_size": 512, 00:08:35.761 "num_blocks": 512, 00:08:35.761 "name": "malloc0" 00:08:35.761 }, 00:08:35.761 "method": "bdev_malloc_create" 00:08:35.761 }, 00:08:35.761 { 00:08:35.761 "params": { 00:08:35.761 "block_size": 512, 00:08:35.761 "num_blocks": 512, 00:08:35.761 "name": "malloc1" 00:08:35.761 }, 00:08:35.761 "method": "bdev_malloc_create" 00:08:35.761 }, 00:08:35.761 { 00:08:35.761 "method": "bdev_wait_for_examine" 00:08:35.761 } 00:08:35.761 ] 00:08:35.761 } 00:08:35.761 ] 00:08:35.761 } 00:08:35.761 [2024-12-16 14:24:27.922681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.761 [2024-12-16 14:24:27.945827] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.020 [2024-12-16 14:24:27.982184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.020 [2024-12-16 14:24:28.028645] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:36.020 [2024-12-16 14:24:28.028770] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.020 [2024-12-16 14:24:28.104231] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:36.020 14:24:28 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:08:36.020 14:24:28 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:36.020 14:24:28 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:08:36.020 14:24:28 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:08:36.020 14:24:28 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:08:36.020 14:24:28 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:36.020 00:08:36.020 real 0m0.447s 00:08:36.020 user 0m0.269s 00:08:36.020 sys 0m0.138s 00:08:36.020 14:24:28 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.020 14:24:28 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:36.020 ************************************ 00:08:36.020 END TEST dd_bs_not_multiple 00:08:36.020 ************************************ 00:08:36.020 00:08:36.020 real 0m5.247s 00:08:36.020 user 0m2.924s 00:08:36.020 sys 0m1.750s 00:08:36.020 14:24:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.020 ************************************ 00:08:36.020 END TEST spdk_dd_negative 00:08:36.020 14:24:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:36.020 ************************************ 00:08:36.279 00:08:36.279 real 1m1.058s 00:08:36.279 user 0m38.267s 00:08:36.279 sys 0m26.126s 00:08:36.279 14:24:28 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.279 ************************************ 00:08:36.279 END TEST spdk_dd 00:08:36.279 
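Note on the exit-status handling visible throughout the spdk_dd negative tests above: each case wraps the spdk_dd invocation in the NOT helper from common/autotest_common.sh, captures the exit status in es, folds statuses above 128 back down (es=244 becomes 116, es=234 becomes 106, es=228 becomes 100 in the lines above), and treats the test as passed only when es ends up non-zero. The lines below are a minimal bash sketch of that pattern, reconstructed from the log output for illustration only; it is not the actual autotest_common.sh implementation, and the helper body (in particular the subtract-128 folding) is an assumption inferred from the es values printed above.

# Hypothetical sketch of the NOT wrapper pattern seen in these tests.
# Assumption: the real helper in common/autotest_common.sh does more
# (valid_exec_arg, xtrace handling, a case on es); this only mirrors the
# exit-status logic visible in the log.
NOT() {
    local es=0
    "$@" || es=$?            # run the command (expected to fail) and capture its status
    if (( es > 128 )); then  # fold signal-style statuses: 244 -> 116, 234 -> 106, 228 -> 100
        es=$(( es - 128 ))
    fi
    (( es != 0 ))            # succeed only if the wrapped command failed
}

# Example invocation mirroring the dd_wrong_blocksize case above (paths as printed in the log):
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0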
************************************ 00:08:36.279 14:24:28 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:36.279 14:24:28 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:36.279 14:24:28 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:36.279 14:24:28 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:36.279 14:24:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:36.279 14:24:28 -- common/autotest_common.sh@10 -- # set +x 00:08:36.279 14:24:28 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:36.279 14:24:28 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:08:36.279 14:24:28 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:08:36.279 14:24:28 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:08:36.279 14:24:28 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:08:36.279 14:24:28 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:08:36.279 14:24:28 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:36.279 14:24:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:36.279 14:24:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.279 14:24:28 -- common/autotest_common.sh@10 -- # set +x 00:08:36.279 ************************************ 00:08:36.279 START TEST nvmf_tcp 00:08:36.279 ************************************ 00:08:36.279 14:24:28 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:36.279 * Looking for test storage... 00:08:36.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:36.279 14:24:28 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:36.279 14:24:28 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:08:36.279 14:24:28 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:36.538 14:24:28 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.538 14:24:28 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:36.538 14:24:28 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.539 14:24:28 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:36.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.539 --rc genhtml_branch_coverage=1 00:08:36.539 --rc genhtml_function_coverage=1 00:08:36.539 --rc genhtml_legend=1 00:08:36.539 --rc geninfo_all_blocks=1 00:08:36.539 --rc geninfo_unexecuted_blocks=1 00:08:36.539 00:08:36.539 ' 00:08:36.539 14:24:28 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:36.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.539 --rc genhtml_branch_coverage=1 00:08:36.539 --rc genhtml_function_coverage=1 00:08:36.539 --rc genhtml_legend=1 00:08:36.539 --rc geninfo_all_blocks=1 00:08:36.539 --rc geninfo_unexecuted_blocks=1 00:08:36.539 00:08:36.539 ' 00:08:36.539 14:24:28 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:36.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.539 --rc genhtml_branch_coverage=1 00:08:36.539 --rc genhtml_function_coverage=1 00:08:36.539 --rc genhtml_legend=1 00:08:36.539 --rc geninfo_all_blocks=1 00:08:36.539 --rc geninfo_unexecuted_blocks=1 00:08:36.539 00:08:36.539 ' 00:08:36.539 14:24:28 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:36.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.539 --rc genhtml_branch_coverage=1 00:08:36.539 --rc genhtml_function_coverage=1 00:08:36.539 --rc genhtml_legend=1 00:08:36.539 --rc geninfo_all_blocks=1 00:08:36.539 --rc geninfo_unexecuted_blocks=1 00:08:36.539 00:08:36.539 ' 00:08:36.539 14:24:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:36.539 14:24:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:36.539 14:24:28 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:36.539 14:24:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:36.539 14:24:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.539 14:24:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:36.539 ************************************ 00:08:36.539 START TEST nvmf_target_core 00:08:36.539 ************************************ 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:36.539 * Looking for test storage... 00:08:36.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:36.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.539 --rc genhtml_branch_coverage=1 00:08:36.539 --rc genhtml_function_coverage=1 00:08:36.539 --rc genhtml_legend=1 00:08:36.539 --rc geninfo_all_blocks=1 00:08:36.539 --rc geninfo_unexecuted_blocks=1 00:08:36.539 00:08:36.539 ' 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:36.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.539 --rc genhtml_branch_coverage=1 00:08:36.539 --rc genhtml_function_coverage=1 00:08:36.539 --rc genhtml_legend=1 00:08:36.539 --rc geninfo_all_blocks=1 00:08:36.539 --rc geninfo_unexecuted_blocks=1 00:08:36.539 00:08:36.539 ' 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:36.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.539 --rc genhtml_branch_coverage=1 00:08:36.539 --rc genhtml_function_coverage=1 00:08:36.539 --rc genhtml_legend=1 00:08:36.539 --rc geninfo_all_blocks=1 00:08:36.539 --rc geninfo_unexecuted_blocks=1 00:08:36.539 00:08:36.539 ' 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:36.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.539 --rc genhtml_branch_coverage=1 00:08:36.539 --rc genhtml_function_coverage=1 00:08:36.539 --rc genhtml_legend=1 00:08:36.539 --rc geninfo_all_blocks=1 00:08:36.539 --rc geninfo_unexecuted_blocks=1 00:08:36.539 00:08:36.539 ' 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.539 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.799 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:08:36.799 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:08:36.799 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.799 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.799 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:36.799 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.799 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:36.799 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:36.799 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.799 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.799 14:24:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.799 14:24:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.799 14:24:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:36.800 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:36.800 ************************************ 00:08:36.800 START TEST nvmf_host_management 00:08:36.800 ************************************ 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:36.800 * Looking for test storage... 
00:08:36.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:36.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.800 --rc genhtml_branch_coverage=1 00:08:36.800 --rc genhtml_function_coverage=1 00:08:36.800 --rc genhtml_legend=1 00:08:36.800 --rc geninfo_all_blocks=1 00:08:36.800 --rc geninfo_unexecuted_blocks=1 00:08:36.800 00:08:36.800 ' 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:36.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.800 --rc genhtml_branch_coverage=1 00:08:36.800 --rc genhtml_function_coverage=1 00:08:36.800 --rc genhtml_legend=1 00:08:36.800 --rc geninfo_all_blocks=1 00:08:36.800 --rc geninfo_unexecuted_blocks=1 00:08:36.800 00:08:36.800 ' 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:36.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.800 --rc genhtml_branch_coverage=1 00:08:36.800 --rc genhtml_function_coverage=1 00:08:36.800 --rc genhtml_legend=1 00:08:36.800 --rc geninfo_all_blocks=1 00:08:36.800 --rc geninfo_unexecuted_blocks=1 00:08:36.800 00:08:36.800 ' 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:36.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.800 --rc genhtml_branch_coverage=1 00:08:36.800 --rc genhtml_function_coverage=1 00:08:36.800 --rc genhtml_legend=1 00:08:36.800 --rc geninfo_all_blocks=1 00:08:36.800 --rc geninfo_unexecuted_blocks=1 00:08:36.800 00:08:36.800 ' 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.800 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:36.801 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:36.801 14:24:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:36.801 Cannot find device "nvmf_init_br" 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:36.801 Cannot find device "nvmf_init_br2" 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:36.801 14:24:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:37.060 Cannot find device "nvmf_tgt_br" 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:37.060 Cannot find device "nvmf_tgt_br2" 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:37.060 Cannot find device "nvmf_init_br" 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:37.060 Cannot find device "nvmf_init_br2" 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:37.060 Cannot find device "nvmf_tgt_br" 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:37.060 Cannot find device "nvmf_tgt_br2" 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:37.060 Cannot find device "nvmf_br" 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:37.060 Cannot find device "nvmf_init_if" 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:37.060 Cannot find device "nvmf_init_if2" 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:37.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:37.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:37.060 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:37.319 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:37.319 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:37.319 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:37.319 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:37.319 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:37.319 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:37.319 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:37.319 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:37.319 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:37.319 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:37.319 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:37.319 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:37.319 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:37.319 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.126 ms 00:08:37.319 00:08:37.319 --- 10.0.0.3 ping statistics --- 00:08:37.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.319 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:08:37.319 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:37.319 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:37.319 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:08:37.319 00:08:37.320 --- 10.0.0.4 ping statistics --- 00:08:37.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.320 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:37.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:37.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:37.320 00:08:37.320 --- 10.0.0.1 ping statistics --- 00:08:37.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.320 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:37.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:37.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:08:37.320 00:08:37.320 --- 10.0.0.2 ping statistics --- 00:08:37.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.320 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=76097 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 76097 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 76097 ']' 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.320 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.579 [2024-12-16 14:24:29.575685] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
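The nvmf_veth_init sequence above builds the self-contained topology used by the rest of this run: a target network namespace, veth pairs for the initiator side (10.0.0.1, 10.0.0.2) and the in-namespace target side (10.0.0.3, 10.0.0.4), one bridge joining the host-side peers, and iptables rules admitting NVMe/TCP on port 4420. Condensed into a sketch using the same interface and namespace names as the log:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # One bridge ties the host-side peers together so the initiator addresses can
    # reach the namespaced target addresses.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # Admit NVMe/TCP (port 4420) on the initiator interfaces and allow bridge forwarding.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Same sanity pings as the harness performs.
    ping -c 1 10.0.0.3
    ping -c 1 10.0.0.4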
00:08:37.579 [2024-12-16 14:24:29.575784] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.579 [2024-12-16 14:24:29.733334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:37.579 [2024-12-16 14:24:29.762979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.579 [2024-12-16 14:24:29.763234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.579 [2024-12-16 14:24:29.763543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.579 [2024-12-16 14:24:29.763795] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.579 [2024-12-16 14:24:29.763930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.579 [2024-12-16 14:24:29.765046] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.579 [2024-12-16 14:24:29.765136] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.579 [2024-12-16 14:24:29.765228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:37.579 [2024-12-16 14:24:29.765404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.838 [2024-12-16 14:24:29.803937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.838 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.838 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:37.838 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:37.838 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:37.838 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.838 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.839 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:37.839 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.839 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.839 [2024-12-16 14:24:29.899333] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.839 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.839 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:37.839 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:37.839 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.839 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:08:37.839 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:37.839 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:37.839 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.839 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.839 Malloc0 00:08:37.839 [2024-12-16 14:24:29.969887] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:37.839 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.839 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:37.839 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:37.839 14:24:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=76149 00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 76149 /var/tmp/bdevperf.sock 00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 76149 ']' 00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:37.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:37.839 { 00:08:37.839 "params": { 00:08:37.839 "name": "Nvme$subsystem", 00:08:37.839 "trtype": "$TEST_TRANSPORT", 00:08:37.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:37.839 "adrfam": "ipv4", 00:08:37.839 "trsvcid": "$NVMF_PORT", 00:08:37.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:37.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:37.839 "hdgst": ${hdgst:-false}, 00:08:37.839 "ddgst": ${ddgst:-false} 00:08:37.839 }, 00:08:37.839 "method": "bdev_nvme_attach_controller" 00:08:37.839 } 00:08:37.839 EOF 00:08:37.839 )") 00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:37.839 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:37.839 "params": { 00:08:37.839 "name": "Nvme0", 00:08:37.839 "trtype": "tcp", 00:08:37.839 "traddr": "10.0.0.3", 00:08:37.839 "adrfam": "ipv4", 00:08:37.839 "trsvcid": "4420", 00:08:37.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:37.839 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:37.839 "hdgst": false, 00:08:37.839 "ddgst": false 00:08:37.839 }, 00:08:37.839 "method": "bdev_nvme_attach_controller" 00:08:37.839 }' 00:08:38.098 [2024-12-16 14:24:30.074192] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:38.098 [2024-12-16 14:24:30.074279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76149 ] 00:08:38.098 [2024-12-16 14:24:30.225391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.098 [2024-12-16 14:24:30.250193] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.098 [2024-12-16 14:24:30.295855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.357 Running I/O for 10 seconds... 
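With the target listening on 10.0.0.3:4420, host_management launches bdevperf by generating a one-controller NVMe attach config on the fly and passing it over an anonymous pipe (--json /dev/fd/63). A sketch of that pattern follows; only the controller entry appears verbatim in the log, so the outer subsystems/bdev wrapper here is an assumption about how gen_nvmf_target_json splices it together:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf   # path as printed in this log
    cfg=$(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0", "trtype": "tcp", "adrfam": "ipv4",
                "traddr": "10.0.0.3", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    )
    # Same workload parameters as the logged invocation: 64 queued 64 KiB verify I/Os for 10 s,
    # with the config delivered over a /dev/fd pipe via process substitution.
    "$BDEVPERF" -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 --json <(printf '%s\n' "$cfg")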
00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:38.357 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:38.617 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:38.617 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:38.617 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:38.617 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:38.617 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.617 14:24:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.877 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.877 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=457 00:08:38.877 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 457 -ge 100 ']' 00:08:38.877 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:38.877 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:38.877 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:38.877 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:38.877 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.877 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.877 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.877 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:38.877 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.877 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.877 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.877 14:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:38.877 [2024-12-16 14:24:30.884330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.877 [2024-12-16 14:24:30.884391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.877 [2024-12-16 14:24:30.884416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.877 [2024-12-16 14:24:30.884427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.877 [2024-12-16 14:24:30.884453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.877 [2024-12-16 14:24:30.884462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.877 [2024-12-16 14:24:30.884488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.877 [2024-12-16 14:24:30.884497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.877 [2024-12-16 14:24:30.884508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.877 [2024-12-16 14:24:30.884517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.877 [2024-12-16 14:24:30.884527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.877 [2024-12-16 14:24:30.884536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.877 [2024-12-16 14:24:30.884546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.877 [2024-12-16 14:24:30.884555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.877 [2024-12-16 14:24:30.884565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.877 [2024-12-16 14:24:30.884574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.877 [2024-12-16 14:24:30.884584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.877 [2024-12-16 14:24:30.884592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.877 [2024-12-16 14:24:30.884602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.877 [2024-12-16 14:24:30.884611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.877 [2024-12-16 14:24:30.884654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.877 [2024-12-16 14:24:30.884663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.877 [2024-12-16 14:24:30.884674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.877 [2024-12-16 14:24:30.884683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.877 [2024-12-16 14:24:30.884706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.877 [2024-12-16 14:24:30.884716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.877 [2024-12-16 14:24:30.884727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.884736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.884748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.884758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.884769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.884778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.884789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.884798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.884810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.884819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.884830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.884840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.884851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.884868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.884884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.884894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.884905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.884915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.884926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.884935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.884947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.884956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.884967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.884977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.884988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.884997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:38.878 [2024-12-16 14:24:30.885394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.878 [2024-12-16 14:24:30.885575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.878 [2024-12-16 14:24:30.885585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.879 [2024-12-16 14:24:30.885596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.879 
[2024-12-16 14:24:30.885604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.879 [2024-12-16 14:24:30.885615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.879 [2024-12-16 14:24:30.885624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.879 [2024-12-16 14:24:30.885635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.879 [2024-12-16 14:24:30.885643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.879 [2024-12-16 14:24:30.885654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.879 [2024-12-16 14:24:30.885663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.879 [2024-12-16 14:24:30.885674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.879 [2024-12-16 14:24:30.885683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.879 [2024-12-16 14:24:30.885710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.879 [2024-12-16 14:24:30.885719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.879 [2024-12-16 14:24:30.885734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.879 [2024-12-16 14:24:30.885743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.879 [2024-12-16 14:24:30.885755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.879 [2024-12-16 14:24:30.885764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.879 [2024-12-16 14:24:30.885775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.879 [2024-12-16 14:24:30.885785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.879 [2024-12-16 14:24:30.885796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:38.879 [2024-12-16 14:24:30.885805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.879 [2024-12-16 14:24:30.885816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd32160 is same with the state(6) to be set 00:08:38.879 [2024-12-16 14:24:30.885980] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.879 [2024-12-16 14:24:30.886008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.879 [2024-12-16 14:24:30.886021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.879 [2024-12-16 14:24:30.886030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.879 [2024-12-16 14:24:30.886040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.879 [2024-12-16 14:24:30.886049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.879 [2024-12-16 14:24:30.886073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:38.879 [2024-12-16 14:24:30.886082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:38.879 [2024-12-16 14:24:30.886092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd0830 is same with the state(6) to be set 00:08:38.879 [2024-12-16 14:24:30.887254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:38.879 task offset: 73728 on job bdev=Nvme0n1 fails 00:08:38.879 00:08:38.879 Latency(us) 00:08:38.879 [2024-12-16T14:24:31.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.879 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:38.879 Job: Nvme0n1 ended in about 0.48 seconds with error 00:08:38.879 Verification LBA range: start 0x0 length 0x400 00:08:38.879 Nvme0n1 : 0.48 1196.30 74.77 132.92 0.00 46338.59 2174.60 50998.92 00:08:38.879 [2024-12-16T14:24:31.079Z] =================================================================================================================== 00:08:38.879 [2024-12-16T14:24:31.079Z] Total : 1196.30 74.77 132.92 0.00 46338.59 2174.60 50998.92 00:08:38.879 [2024-12-16 14:24:30.889306] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:38.879 [2024-12-16 14:24:30.889335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd0830 (9): Bad file descriptor 00:08:38.879 [2024-12-16 14:24:30.892291] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:08:39.817 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 76149 00:08:39.817 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (76149) - No such process 00:08:39.817 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:39.817 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:39.817 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:39.817 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:39.817 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:39.817 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:39.817 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:39.817 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:39.817 { 00:08:39.817 "params": { 00:08:39.817 "name": "Nvme$subsystem", 00:08:39.817 "trtype": "$TEST_TRANSPORT", 00:08:39.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.817 "adrfam": "ipv4", 00:08:39.817 "trsvcid": "$NVMF_PORT", 00:08:39.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.818 "hdgst": ${hdgst:-false}, 00:08:39.818 "ddgst": ${ddgst:-false} 00:08:39.818 }, 00:08:39.818 "method": "bdev_nvme_attach_controller" 00:08:39.818 } 00:08:39.818 EOF 00:08:39.818 )") 00:08:39.818 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:39.818 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:39.818 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:39.818 14:24:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:39.818 "params": { 00:08:39.818 "name": "Nvme0", 00:08:39.818 "trtype": "tcp", 00:08:39.818 "traddr": "10.0.0.3", 00:08:39.818 "adrfam": "ipv4", 00:08:39.818 "trsvcid": "4420", 00:08:39.818 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:39.818 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:39.818 "hdgst": false, 00:08:39.818 "ddgst": false 00:08:39.818 }, 00:08:39.818 "method": "bdev_nvme_attach_controller" 00:08:39.818 }' 00:08:39.818 [2024-12-16 14:24:31.938328] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:39.818 [2024-12-16 14:24:31.938456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76189 ] 00:08:40.077 [2024-12-16 14:24:32.087881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.077 [2024-12-16 14:24:32.109986] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.077 [2024-12-16 14:24:32.149893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.077 Running I/O for 1 seconds... 00:08:41.454 1472.00 IOPS, 92.00 MiB/s 00:08:41.454 Latency(us) 00:08:41.454 [2024-12-16T14:24:33.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.454 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:41.454 Verification LBA range: start 0x0 length 0x400 00:08:41.454 Nvme0n1 : 1.01 1521.98 95.12 0.00 0.00 41216.99 3961.95 37653.41 00:08:41.454 [2024-12-16T14:24:33.654Z] =================================================================================================================== 00:08:41.454 [2024-12-16T14:24:33.654Z] Total : 1521.98 95.12 0.00 0.00 41216.99 3961.95 37653.41 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:41.454 rmmod nvme_tcp 00:08:41.454 rmmod nvme_fabrics 00:08:41.454 rmmod nvme_keyring 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 76097 ']' 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 76097 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 76097 ']' 00:08:41.454 14:24:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 76097 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76097 00:08:41.454 killing process with pid 76097 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76097' 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 76097 00:08:41.454 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 76097 00:08:41.713 [2024-12-16 14:24:33.714865] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:41.713 14:24:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:41.713 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:41.972 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:41.972 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:41.972 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.972 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.972 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.972 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:41.972 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:41.972 00:08:41.972 real 0m5.231s 00:08:41.972 user 0m18.185s 00:08:41.972 sys 0m1.430s 00:08:41.972 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.972 14:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.972 ************************************ 00:08:41.972 END TEST nvmf_host_management 00:08:41.972 ************************************ 00:08:41.972 14:24:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:41.972 14:24:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:41.972 14:24:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.972 14:24:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:41.972 ************************************ 00:08:41.972 START TEST nvmf_lvol 00:08:41.972 ************************************ 00:08:41.972 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:41.972 * Looking for test storage... 
00:08:41.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:41.972 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:41.972 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:08:41.972 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.232 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:42.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.233 --rc genhtml_branch_coverage=1 00:08:42.233 --rc genhtml_function_coverage=1 00:08:42.233 --rc genhtml_legend=1 00:08:42.233 --rc geninfo_all_blocks=1 00:08:42.233 --rc geninfo_unexecuted_blocks=1 00:08:42.233 00:08:42.233 ' 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:42.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.233 --rc genhtml_branch_coverage=1 00:08:42.233 --rc genhtml_function_coverage=1 00:08:42.233 --rc genhtml_legend=1 00:08:42.233 --rc geninfo_all_blocks=1 00:08:42.233 --rc geninfo_unexecuted_blocks=1 00:08:42.233 00:08:42.233 ' 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:42.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.233 --rc genhtml_branch_coverage=1 00:08:42.233 --rc genhtml_function_coverage=1 00:08:42.233 --rc genhtml_legend=1 00:08:42.233 --rc geninfo_all_blocks=1 00:08:42.233 --rc geninfo_unexecuted_blocks=1 00:08:42.233 00:08:42.233 ' 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:42.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.233 --rc genhtml_branch_coverage=1 00:08:42.233 --rc genhtml_function_coverage=1 00:08:42.233 --rc genhtml_legend=1 00:08:42.233 --rc geninfo_all_blocks=1 00:08:42.233 --rc geninfo_unexecuted_blocks=1 00:08:42.233 00:08:42.233 ' 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.233 14:24:34 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.233 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:42.233 
14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:42.233 Cannot find device "nvmf_init_br" 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:42.233 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:42.234 Cannot find device "nvmf_init_br2" 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:42.234 Cannot find device "nvmf_tgt_br" 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:42.234 Cannot find device "nvmf_tgt_br2" 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:42.234 Cannot find device "nvmf_init_br" 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:42.234 Cannot find device "nvmf_init_br2" 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:42.234 Cannot find device "nvmf_tgt_br" 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:42.234 Cannot find device "nvmf_tgt_br2" 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:42.234 Cannot find device "nvmf_br" 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:42.234 Cannot find device "nvmf_init_if" 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:42.234 Cannot find device "nvmf_init_if2" 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:42.234 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:42.234 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:42.234 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:42.493 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:42.493 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:42.493 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:42.493 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:42.493 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:42.493 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:42.493 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:42.493 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:42.493 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:42.493 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:42.493 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:42.493 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:42.494 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:42.494 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:08:42.494 00:08:42.494 --- 10.0.0.3 ping statistics --- 00:08:42.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.494 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:42.494 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:42.494 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:08:42.494 00:08:42.494 --- 10.0.0.4 ping statistics --- 00:08:42.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.494 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:42.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:42.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:42.494 00:08:42.494 --- 10.0.0.1 ping statistics --- 00:08:42.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.494 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:42.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:42.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:08:42.494 00:08:42.494 --- 10.0.0.2 ping statistics --- 00:08:42.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.494 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=76450 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 76450 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 76450 ']' 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.494 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:42.494 [2024-12-16 14:24:34.680362] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
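For reference, the nvmf_veth_init sequence traced above amounts to a small, hand-reproducible topology: two initiator veths stay on the host (10.0.0.1 and 10.0.0.2), their peer ends join a bridge together with the host-side peers of two target veths, and the target ends (10.0.0.3 and 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace. A minimal sketch using only commands that appear in the trace (run as root; the ipts helper in common.sh simply wraps iptables and tags each rule with an SPDK_NVMF comment so nvmftestfini can strip it later):

# Target side lives in its own network namespace; initiator side stays on the host.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiators on 10.0.0.1/10.0.0.2, targets on 10.0.0.3/10.0.0.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge all host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip link set nvmf_init_br up
ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Open the NVMe/TCP port on the initiator interfaces and allow bridge-local forwarding,
# tagging every rule so teardown can filter them out of iptables-save.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

# Sanity check, as in the trace: host reaches the target addresses and the namespace reaches the host.
ping -c 1 10.0.0.3
ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2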
00:08:42.494 [2024-12-16 14:24:34.680483] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.753 [2024-12-16 14:24:34.828626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:42.753 [2024-12-16 14:24:34.851545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.753 [2024-12-16 14:24:34.851612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.753 [2024-12-16 14:24:34.851626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.753 [2024-12-16 14:24:34.851641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.753 [2024-12-16 14:24:34.851650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.753 [2024-12-16 14:24:34.852490] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.753 [2024-12-16 14:24:34.852612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.753 [2024-12-16 14:24:34.852631] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.753 [2024-12-16 14:24:34.884861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.753 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.753 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:42.753 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:42.753 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:42.753 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:43.012 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.012 14:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:43.012 [2024-12-16 14:24:35.184701] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.012 14:24:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:43.270 14:24:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:43.270 14:24:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:43.528 14:24:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:43.528 14:24:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:44.096 14:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:44.096 14:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=34c3f6b4-0c89-4325-98d6-b3843b0e3a07 00:08:44.096 14:24:36 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 34c3f6b4-0c89-4325-98d6-b3843b0e3a07 lvol 20 00:08:44.355 14:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=92b1ee89-b304-49a5-aee1-8c4d18ab6746 00:08:44.355 14:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:44.614 14:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 92b1ee89-b304-49a5-aee1-8c4d18ab6746 00:08:44.873 14:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:45.131 [2024-12-16 14:24:37.244619] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:45.131 14:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:45.390 14:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=76518 00:08:45.390 14:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:45.390 14:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:46.326 14:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 92b1ee89-b304-49a5-aee1-8c4d18ab6746 MY_SNAPSHOT 00:08:46.893 14:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e67ddf2a-b7ae-44c1-9e61-bb654c4dbb59 00:08:46.893 14:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 92b1ee89-b304-49a5-aee1-8c4d18ab6746 30 00:08:47.152 14:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone e67ddf2a-b7ae-44c1-9e61-bb654c4dbb59 MY_CLONE 00:08:47.410 14:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=cc9fe292-edcd-4988-8b5c-3c11574b223a 00:08:47.410 14:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate cc9fe292-edcd-4988-8b5c-3c11574b223a 00:08:47.976 14:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 76518 00:08:56.100 Initializing NVMe Controllers 00:08:56.100 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:56.100 Controller IO queue size 128, less than required. 00:08:56.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:56.100 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:56.100 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:56.100 Initialization complete. Launching workers. 
00:08:56.100 ======================================================== 00:08:56.100 Latency(us) 00:08:56.100 Device Information : IOPS MiB/s Average min max 00:08:56.101 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10744.80 41.97 11921.87 1762.98 63687.94 00:08:56.101 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10778.80 42.10 11874.09 2097.22 72348.95 00:08:56.101 ======================================================== 00:08:56.101 Total : 21523.60 84.08 11897.94 1762.98 72348.95 00:08:56.101 00:08:56.101 14:24:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:56.101 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 92b1ee89-b304-49a5-aee1-8c4d18ab6746 00:08:56.359 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 34c3f6b4-0c89-4325-98d6-b3843b0e3a07 00:08:56.618 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:56.618 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:56.618 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:56.619 rmmod nvme_tcp 00:08:56.619 rmmod nvme_fabrics 00:08:56.619 rmmod nvme_keyring 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 76450 ']' 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 76450 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 76450 ']' 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 76450 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76450 00:08:56.619 killing process with pid 76450 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 76450' 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 76450 00:08:56.619 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 76450 00:08:56.878 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:56.878 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:56.878 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:56.878 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:56.878 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:56.878 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:56.878 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:56.878 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:56.878 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:56.878 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:56.878 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:56.878 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:56.878 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:56.878 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:56.878 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:56.878 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:56.878 14:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:56.878 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:56.878 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:56.878 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:57.137 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:57.137 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:57.137 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:57.137 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.137 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.137 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.137 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:57.137 ************************************ 00:08:57.137 END TEST nvmf_lvol 00:08:57.137 ************************************ 00:08:57.137 00:08:57.137 real 0m15.128s 00:08:57.137 user 
1m3.063s 00:08:57.137 sys 0m4.169s 00:08:57.137 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.137 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:57.137 14:24:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:57.137 14:24:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:57.137 14:24:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.137 14:24:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.137 ************************************ 00:08:57.137 START TEST nvmf_lvs_grow 00:08:57.137 ************************************ 00:08:57.137 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:57.137 * Looking for test storage... 00:08:57.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:57.137 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:57.137 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:57.137 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:57.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.397 --rc genhtml_branch_coverage=1 00:08:57.397 --rc genhtml_function_coverage=1 00:08:57.397 --rc genhtml_legend=1 00:08:57.397 --rc geninfo_all_blocks=1 00:08:57.397 --rc geninfo_unexecuted_blocks=1 00:08:57.397 00:08:57.397 ' 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:57.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.397 --rc genhtml_branch_coverage=1 00:08:57.397 --rc genhtml_function_coverage=1 00:08:57.397 --rc genhtml_legend=1 00:08:57.397 --rc geninfo_all_blocks=1 00:08:57.397 --rc geninfo_unexecuted_blocks=1 00:08:57.397 00:08:57.397 ' 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:57.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.397 --rc genhtml_branch_coverage=1 00:08:57.397 --rc genhtml_function_coverage=1 00:08:57.397 --rc genhtml_legend=1 00:08:57.397 --rc geninfo_all_blocks=1 00:08:57.397 --rc geninfo_unexecuted_blocks=1 00:08:57.397 00:08:57.397 ' 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:57.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.397 --rc genhtml_branch_coverage=1 00:08:57.397 --rc genhtml_function_coverage=1 00:08:57.397 --rc genhtml_legend=1 00:08:57.397 --rc geninfo_all_blocks=1 00:08:57.397 --rc geninfo_unexecuted_blocks=1 00:08:57.397 00:08:57.397 ' 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:57.397 14:24:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.397 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:57.398 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:57.398 Cannot find device "nvmf_init_br" 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:57.398 Cannot find device "nvmf_init_br2" 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:57.398 Cannot find device "nvmf_tgt_br" 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:57.398 Cannot find device "nvmf_tgt_br2" 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:57.398 Cannot find device "nvmf_init_br" 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:57.398 Cannot find device "nvmf_init_br2" 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:57.398 Cannot find device "nvmf_tgt_br" 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:57.398 Cannot find device "nvmf_tgt_br2" 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:57.398 Cannot find device "nvmf_br" 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:57.398 Cannot find device "nvmf_init_if" 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:57.398 Cannot find device "nvmf_init_if2" 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:57.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:57.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:57.398 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:57.399 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:57.399 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:57.399 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:57.656 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:57.656 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:57.656 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:57.656 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:57.656 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:57.656 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:57.657 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:57.657 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:08:57.657 00:08:57.657 --- 10.0.0.3 ping statistics --- 00:08:57.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.657 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:57.657 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:57.657 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:08:57.657 00:08:57.657 --- 10.0.0.4 ping statistics --- 00:08:57.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.657 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:57.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:57.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:57.657 00:08:57.657 --- 10.0.0.1 ping statistics --- 00:08:57.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.657 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:57.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:57.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:08:57.657 00:08:57.657 --- 10.0.0.2 ping statistics --- 00:08:57.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.657 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:57.657 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:57.916 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:57.916 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:57.916 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:57.916 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:57.916 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=76897 00:08:57.916 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:57.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.916 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 76897 00:08:57.916 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 76897 ']' 00:08:57.916 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.916 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.916 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.916 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.916 14:24:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:57.916 [2024-12-16 14:24:49.920873] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
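Before the lvs_grow run builds its own stack, note that the nvmf_lvol test above reduces to a short rpc.py sequence once the target is running; a sketch of that flow, with every command taken from the earlier trace (rpc path and UUID capture as in nvmf_lvol.sh; sizes are in MiB as set by the script):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport, then a raid0 of two 64 MiB malloc bdevs as the lvstore backing device.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                      # -> Malloc0
$rpc bdev_malloc_create 64 512                      # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

# Logical volume store and a 20 MiB lvol on top of it.
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

# Export the lvol as a namespace of cnode0 on the target address from the veth setup.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

# Snapshot, resize, clone and inflate while spdk_nvme_perf drives I/O, exactly as traced above.
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"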
00:08:57.916 [2024-12-16 14:24:49.921158] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.916 [2024-12-16 14:24:50.066616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.916 [2024-12-16 14:24:50.086408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.916 [2024-12-16 14:24:50.086690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.916 [2024-12-16 14:24:50.086850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.916 [2024-12-16 14:24:50.086939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.916 [2024-12-16 14:24:50.087039] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.916 [2024-12-16 14:24:50.087395] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.175 [2024-12-16 14:24:50.115665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.175 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.175 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:58.175 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:58.175 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:58.175 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:58.175 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.175 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:58.434 [2024-12-16 14:24:50.452149] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.434 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:58.434 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.434 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.434 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:58.434 ************************************ 00:08:58.434 START TEST lvs_grow_clean 00:08:58.434 ************************************ 00:08:58.434 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:58.434 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:58.434 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:58.434 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:58.434 14:24:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:58.434 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:58.434 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:58.434 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:58.434 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:58.434 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:58.693 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:58.693 14:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:58.952 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f7409c5f-cfb5-4789-b1ad-efe0c225737d 00:08:58.952 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7409c5f-cfb5-4789-b1ad-efe0c225737d 00:08:58.952 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:59.211 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:59.211 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:59.211 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f7409c5f-cfb5-4789-b1ad-efe0c225737d lvol 150 00:08:59.470 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9f92600a-505f-427e-995f-b81bcffb2675 00:08:59.470 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:59.470 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:59.729 [2024-12-16 14:24:51.859133] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:59.729 [2024-12-16 14:24:51.859516] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:59.729 true 00:08:59.729 14:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7409c5f-cfb5-4789-b1ad-efe0c225737d 00:08:59.729 14:24:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:59.988 14:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:59.988 14:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:00.247 14:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9f92600a-505f-427e-995f-b81bcffb2675 00:09:00.506 14:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:00.765 [2024-12-16 14:24:52.859738] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:00.765 14:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:01.024 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:01.024 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=76972 00:09:01.024 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:01.024 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 76972 /var/tmp/bdevperf.sock 00:09:01.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:01.024 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 76972 ']' 00:09:01.024 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:01.024 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.024 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:01.024 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.024 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:01.024 [2024-12-16 14:24:53.157444] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:01.024 [2024-12-16 14:24:53.157527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76972 ] 00:09:01.283 [2024-12-16 14:24:53.306551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.283 [2024-12-16 14:24:53.332660] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.283 [2024-12-16 14:24:53.367217] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.283 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.283 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:01.283 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:01.851 Nvme0n1 00:09:01.852 14:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:01.852 [ 00:09:01.852 { 00:09:01.852 "name": "Nvme0n1", 00:09:01.852 "aliases": [ 00:09:01.852 "9f92600a-505f-427e-995f-b81bcffb2675" 00:09:01.852 ], 00:09:01.852 "product_name": "NVMe disk", 00:09:01.852 "block_size": 4096, 00:09:01.852 "num_blocks": 38912, 00:09:01.852 "uuid": "9f92600a-505f-427e-995f-b81bcffb2675", 00:09:01.852 "numa_id": -1, 00:09:01.852 "assigned_rate_limits": { 00:09:01.852 "rw_ios_per_sec": 0, 00:09:01.852 "rw_mbytes_per_sec": 0, 00:09:01.852 "r_mbytes_per_sec": 0, 00:09:01.852 "w_mbytes_per_sec": 0 00:09:01.852 }, 00:09:01.852 "claimed": false, 00:09:01.852 "zoned": false, 00:09:01.852 "supported_io_types": { 00:09:01.852 "read": true, 00:09:01.852 "write": true, 00:09:01.852 "unmap": true, 00:09:01.852 "flush": true, 00:09:01.852 "reset": true, 00:09:01.852 "nvme_admin": true, 00:09:01.852 "nvme_io": true, 00:09:01.852 "nvme_io_md": false, 00:09:01.852 "write_zeroes": true, 00:09:01.852 "zcopy": false, 00:09:01.852 "get_zone_info": false, 00:09:01.852 "zone_management": false, 00:09:01.852 "zone_append": false, 00:09:01.852 "compare": true, 00:09:01.852 "compare_and_write": true, 00:09:01.852 "abort": true, 00:09:01.852 "seek_hole": false, 00:09:01.852 "seek_data": false, 00:09:01.852 "copy": true, 00:09:01.852 "nvme_iov_md": false 00:09:01.852 }, 00:09:01.852 "memory_domains": [ 00:09:01.852 { 00:09:01.852 "dma_device_id": "system", 00:09:01.852 "dma_device_type": 1 00:09:01.852 } 00:09:01.852 ], 00:09:01.852 "driver_specific": { 00:09:01.852 "nvme": [ 00:09:01.852 { 00:09:01.852 "trid": { 00:09:01.852 "trtype": "TCP", 00:09:01.852 "adrfam": "IPv4", 00:09:01.852 "traddr": "10.0.0.3", 00:09:01.852 "trsvcid": "4420", 00:09:01.852 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:01.852 }, 00:09:01.852 "ctrlr_data": { 00:09:01.852 "cntlid": 1, 00:09:01.852 "vendor_id": "0x8086", 00:09:01.852 "model_number": "SPDK bdev Controller", 00:09:01.852 "serial_number": "SPDK0", 00:09:01.852 "firmware_revision": "25.01", 00:09:01.852 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:01.852 "oacs": { 00:09:01.852 "security": 0, 00:09:01.852 "format": 0, 00:09:01.852 "firmware": 0, 
00:09:01.852 "ns_manage": 0 00:09:01.852 }, 00:09:01.852 "multi_ctrlr": true, 00:09:01.852 "ana_reporting": false 00:09:01.852 }, 00:09:01.852 "vs": { 00:09:01.852 "nvme_version": "1.3" 00:09:01.852 }, 00:09:01.852 "ns_data": { 00:09:01.852 "id": 1, 00:09:01.852 "can_share": true 00:09:01.852 } 00:09:01.852 } 00:09:01.852 ], 00:09:01.852 "mp_policy": "active_passive" 00:09:01.852 } 00:09:01.852 } 00:09:01.852 ] 00:09:01.852 14:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:01.852 14:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=76987 00:09:01.852 14:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:02.111 Running I/O for 10 seconds... 00:09:03.055 Latency(us) 00:09:03.055 [2024-12-16T14:24:55.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.055 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:03.055 [2024-12-16T14:24:55.255Z] =================================================================================================================== 00:09:03.055 [2024-12-16T14:24:55.255Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:03.055 00:09:03.992 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f7409c5f-cfb5-4789-b1ad-efe0c225737d 00:09:03.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.992 Nvme0n1 : 2.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:03.992 [2024-12-16T14:24:56.192Z] =================================================================================================================== 00:09:03.992 [2024-12-16T14:24:56.192Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:03.992 00:09:04.250 true 00:09:04.250 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:04.250 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7409c5f-cfb5-4789-b1ad-efe0c225737d 00:09:04.817 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:04.817 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:04.817 14:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 76987 00:09:05.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.076 Nvme0n1 : 3.00 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:09:05.076 [2024-12-16T14:24:57.276Z] =================================================================================================================== 00:09:05.076 [2024-12-16T14:24:57.276Z] Total : 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:09:05.076 00:09:06.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.013 Nvme0n1 : 4.00 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:09:06.013 [2024-12-16T14:24:58.213Z] 
=================================================================================================================== 00:09:06.013 [2024-12-16T14:24:58.213Z] Total : 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:09:06.013 00:09:06.995 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.995 Nvme0n1 : 5.00 6654.80 26.00 0.00 0.00 0.00 0.00 0.00 00:09:06.995 [2024-12-16T14:24:59.195Z] =================================================================================================================== 00:09:06.995 [2024-12-16T14:24:59.195Z] Total : 6654.80 26.00 0.00 0.00 0.00 0.00 0.00 00:09:06.995 00:09:08.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.372 Nvme0n1 : 6.00 6625.17 25.88 0.00 0.00 0.00 0.00 0.00 00:09:08.372 [2024-12-16T14:25:00.572Z] =================================================================================================================== 00:09:08.372 [2024-12-16T14:25:00.572Z] Total : 6625.17 25.88 0.00 0.00 0.00 0.00 0.00 00:09:08.372 00:09:08.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.940 Nvme0n1 : 7.00 6622.14 25.87 0.00 0.00 0.00 0.00 0.00 00:09:08.940 [2024-12-16T14:25:01.140Z] =================================================================================================================== 00:09:08.940 [2024-12-16T14:25:01.140Z] Total : 6622.14 25.87 0.00 0.00 0.00 0.00 0.00 00:09:08.940 00:09:10.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.317 Nvme0n1 : 8.00 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:09:10.317 [2024-12-16T14:25:02.517Z] =================================================================================================================== 00:09:10.317 [2024-12-16T14:25:02.517Z] Total : 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:09:10.317 00:09:11.252 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.253 Nvme0n1 : 9.00 6547.56 25.58 0.00 0.00 0.00 0.00 0.00 00:09:11.253 [2024-12-16T14:25:03.453Z] =================================================================================================================== 00:09:11.253 [2024-12-16T14:25:03.453Z] Total : 6547.56 25.58 0.00 0.00 0.00 0.00 0.00 00:09:11.253 00:09:12.189 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.189 Nvme0n1 : 10.00 6527.80 25.50 0.00 0.00 0.00 0.00 0.00 00:09:12.189 [2024-12-16T14:25:04.389Z] =================================================================================================================== 00:09:12.189 [2024-12-16T14:25:04.389Z] Total : 6527.80 25.50 0.00 0.00 0.00 0.00 0.00 00:09:12.189 00:09:12.189 00:09:12.189 Latency(us) 00:09:12.189 [2024-12-16T14:25:04.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.189 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.189 Nvme0n1 : 10.01 6532.49 25.52 0.00 0.00 19588.82 16801.05 42181.35 00:09:12.189 [2024-12-16T14:25:04.389Z] =================================================================================================================== 00:09:12.189 [2024-12-16T14:25:04.389Z] Total : 6532.49 25.52 0.00 0.00 19588.82 16801.05 42181.35 00:09:12.189 { 00:09:12.189 "results": [ 00:09:12.189 { 00:09:12.189 "job": "Nvme0n1", 00:09:12.189 "core_mask": "0x2", 00:09:12.189 "workload": "randwrite", 00:09:12.189 "status": "finished", 00:09:12.189 "queue_depth": 128, 00:09:12.189 "io_size": 4096, 00:09:12.189 "runtime": 
10.012414, 00:09:12.189 "iops": 6532.490566211106, 00:09:12.189 "mibps": 25.517541274262133, 00:09:12.189 "io_failed": 0, 00:09:12.189 "io_timeout": 0, 00:09:12.189 "avg_latency_us": 19588.817211876587, 00:09:12.189 "min_latency_us": 16801.04727272727, 00:09:12.189 "max_latency_us": 42181.35272727273 00:09:12.189 } 00:09:12.189 ], 00:09:12.189 "core_count": 1 00:09:12.189 } 00:09:12.189 14:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 76972 00:09:12.189 14:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 76972 ']' 00:09:12.189 14:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 76972 00:09:12.189 14:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:12.189 14:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.189 14:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76972 00:09:12.189 killing process with pid 76972 00:09:12.189 14:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:12.189 14:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:12.189 14:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76972' 00:09:12.189 14:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 76972 00:09:12.189 Received shutdown signal, test time was about 10.000000 seconds 00:09:12.189 00:09:12.189 Latency(us) 00:09:12.189 [2024-12-16T14:25:04.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.189 [2024-12-16T14:25:04.389Z] =================================================================================================================== 00:09:12.189 [2024-12-16T14:25:04.389Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:12.189 14:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 76972 00:09:12.189 14:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:12.448 14:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:12.706 14:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7409c5f-cfb5-4789-b1ad-efe0c225737d 00:09:12.706 14:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:13.273 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:13.273 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:13.273 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:13.273 [2024-12-16 14:25:05.415053] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:13.273 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7409c5f-cfb5-4789-b1ad-efe0c225737d 00:09:13.273 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:13.274 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7409c5f-cfb5-4789-b1ad-efe0c225737d 00:09:13.274 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.274 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.274 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.274 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.274 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.274 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.274 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.274 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:13.274 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7409c5f-cfb5-4789-b1ad-efe0c225737d 00:09:13.532 request: 00:09:13.532 { 00:09:13.532 "uuid": "f7409c5f-cfb5-4789-b1ad-efe0c225737d", 00:09:13.532 "method": "bdev_lvol_get_lvstores", 00:09:13.532 "req_id": 1 00:09:13.532 } 00:09:13.532 Got JSON-RPC error response 00:09:13.532 response: 00:09:13.532 { 00:09:13.532 "code": -19, 00:09:13.532 "message": "No such device" 00:09:13.532 } 00:09:13.791 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:13.791 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:13.791 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:13.791 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:13.791 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:13.791 aio_bdev 00:09:14.050 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
9f92600a-505f-427e-995f-b81bcffb2675 00:09:14.050 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=9f92600a-505f-427e-995f-b81bcffb2675 00:09:14.050 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.050 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:14.050 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.050 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.050 14:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:14.309 14:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f92600a-505f-427e-995f-b81bcffb2675 -t 2000 00:09:14.568 [ 00:09:14.568 { 00:09:14.568 "name": "9f92600a-505f-427e-995f-b81bcffb2675", 00:09:14.568 "aliases": [ 00:09:14.568 "lvs/lvol" 00:09:14.568 ], 00:09:14.568 "product_name": "Logical Volume", 00:09:14.568 "block_size": 4096, 00:09:14.568 "num_blocks": 38912, 00:09:14.568 "uuid": "9f92600a-505f-427e-995f-b81bcffb2675", 00:09:14.568 "assigned_rate_limits": { 00:09:14.568 "rw_ios_per_sec": 0, 00:09:14.568 "rw_mbytes_per_sec": 0, 00:09:14.568 "r_mbytes_per_sec": 0, 00:09:14.568 "w_mbytes_per_sec": 0 00:09:14.568 }, 00:09:14.568 "claimed": false, 00:09:14.568 "zoned": false, 00:09:14.568 "supported_io_types": { 00:09:14.568 "read": true, 00:09:14.568 "write": true, 00:09:14.568 "unmap": true, 00:09:14.568 "flush": false, 00:09:14.568 "reset": true, 00:09:14.568 "nvme_admin": false, 00:09:14.568 "nvme_io": false, 00:09:14.568 "nvme_io_md": false, 00:09:14.568 "write_zeroes": true, 00:09:14.568 "zcopy": false, 00:09:14.568 "get_zone_info": false, 00:09:14.568 "zone_management": false, 00:09:14.568 "zone_append": false, 00:09:14.568 "compare": false, 00:09:14.568 "compare_and_write": false, 00:09:14.568 "abort": false, 00:09:14.568 "seek_hole": true, 00:09:14.568 "seek_data": true, 00:09:14.568 "copy": false, 00:09:14.568 "nvme_iov_md": false 00:09:14.568 }, 00:09:14.568 "driver_specific": { 00:09:14.568 "lvol": { 00:09:14.568 "lvol_store_uuid": "f7409c5f-cfb5-4789-b1ad-efe0c225737d", 00:09:14.568 "base_bdev": "aio_bdev", 00:09:14.568 "thin_provision": false, 00:09:14.568 "num_allocated_clusters": 38, 00:09:14.568 "snapshot": false, 00:09:14.568 "clone": false, 00:09:14.568 "esnap_clone": false 00:09:14.568 } 00:09:14.568 } 00:09:14.568 } 00:09:14.568 ] 00:09:14.568 14:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:14.568 14:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7409c5f-cfb5-4789-b1ad-efe0c225737d 00:09:14.568 14:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:14.827 14:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:14.827 14:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f7409c5f-cfb5-4789-b1ad-efe0c225737d 00:09:14.827 14:25:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:15.086 14:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:15.086 14:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9f92600a-505f-427e-995f-b81bcffb2675 00:09:15.344 14:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f7409c5f-cfb5-4789-b1ad-efe0c225737d 00:09:15.603 14:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:15.861 14:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:16.464 ************************************ 00:09:16.464 END TEST lvs_grow_clean 00:09:16.464 ************************************ 00:09:16.464 00:09:16.464 real 0m17.860s 00:09:16.464 user 0m16.756s 00:09:16.464 sys 0m2.445s 00:09:16.464 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.464 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:16.464 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:16.464 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.464 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.464 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.464 ************************************ 00:09:16.464 START TEST lvs_grow_dirty 00:09:16.464 ************************************ 00:09:16.464 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:16.464 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:16.464 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:16.464 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:16.464 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:16.464 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:16.464 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:16.464 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:16.464 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:16.464 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:16.722 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:16.722 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:16.981 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=865835c8-a7a4-4524-a650-12e51b49c83a 00:09:16.981 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:16.981 14:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 865835c8-a7a4-4524-a650-12e51b49c83a 00:09:17.240 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:17.240 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:17.240 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 865835c8-a7a4-4524-a650-12e51b49c83a lvol 150 00:09:17.240 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=380a4a5c-3b23-49a5-bb90-7f92eb01ec50 00:09:17.240 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:17.240 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:17.499 [2024-12-16 14:25:09.652189] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:17.499 [2024-12-16 14:25:09.652268] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:17.499 true 00:09:17.499 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 865835c8-a7a4-4524-a650-12e51b49c83a 00:09:17.499 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:17.757 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:17.757 14:25:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:18.015 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 380a4a5c-3b23-49a5-bb90-7f92eb01ec50 00:09:18.274 14:25:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:18.533 [2024-12-16 14:25:10.612749] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:18.533 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:18.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:18.792 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=77237 00:09:18.792 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:18.792 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:18.792 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 77237 /var/tmp/bdevperf.sock 00:09:18.792 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 77237 ']' 00:09:18.792 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:18.792 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.792 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:18.792 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.792 14:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:18.792 [2024-12-16 14:25:10.937539] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:18.792 [2024-12-16 14:25:10.937837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77237 ] 00:09:19.051 [2024-12-16 14:25:11.091044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.051 [2024-12-16 14:25:11.116008] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.051 [2024-12-16 14:25:11.150290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:19.051 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.051 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:19.051 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:19.618 Nvme0n1 00:09:19.618 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:19.618 [ 00:09:19.618 { 00:09:19.618 "name": "Nvme0n1", 00:09:19.618 "aliases": [ 00:09:19.618 "380a4a5c-3b23-49a5-bb90-7f92eb01ec50" 00:09:19.618 ], 00:09:19.618 "product_name": "NVMe disk", 00:09:19.618 "block_size": 4096, 00:09:19.618 "num_blocks": 38912, 00:09:19.618 "uuid": "380a4a5c-3b23-49a5-bb90-7f92eb01ec50", 00:09:19.618 "numa_id": -1, 00:09:19.618 "assigned_rate_limits": { 00:09:19.618 "rw_ios_per_sec": 0, 00:09:19.618 "rw_mbytes_per_sec": 0, 00:09:19.618 "r_mbytes_per_sec": 0, 00:09:19.618 "w_mbytes_per_sec": 0 00:09:19.618 }, 00:09:19.618 "claimed": false, 00:09:19.618 "zoned": false, 00:09:19.618 "supported_io_types": { 00:09:19.618 "read": true, 00:09:19.618 "write": true, 00:09:19.618 "unmap": true, 00:09:19.618 "flush": true, 00:09:19.618 "reset": true, 00:09:19.618 "nvme_admin": true, 00:09:19.618 "nvme_io": true, 00:09:19.618 "nvme_io_md": false, 00:09:19.618 "write_zeroes": true, 00:09:19.618 "zcopy": false, 00:09:19.618 "get_zone_info": false, 00:09:19.618 "zone_management": false, 00:09:19.618 "zone_append": false, 00:09:19.618 "compare": true, 00:09:19.618 "compare_and_write": true, 00:09:19.618 "abort": true, 00:09:19.618 "seek_hole": false, 00:09:19.618 "seek_data": false, 00:09:19.618 "copy": true, 00:09:19.618 "nvme_iov_md": false 00:09:19.618 }, 00:09:19.618 "memory_domains": [ 00:09:19.618 { 00:09:19.618 "dma_device_id": "system", 00:09:19.618 "dma_device_type": 1 00:09:19.618 } 00:09:19.618 ], 00:09:19.618 "driver_specific": { 00:09:19.618 "nvme": [ 00:09:19.618 { 00:09:19.618 "trid": { 00:09:19.618 "trtype": "TCP", 00:09:19.618 "adrfam": "IPv4", 00:09:19.618 "traddr": "10.0.0.3", 00:09:19.618 "trsvcid": "4420", 00:09:19.618 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:19.618 }, 00:09:19.618 "ctrlr_data": { 00:09:19.618 "cntlid": 1, 00:09:19.618 "vendor_id": "0x8086", 00:09:19.618 "model_number": "SPDK bdev Controller", 00:09:19.618 "serial_number": "SPDK0", 00:09:19.618 "firmware_revision": "25.01", 00:09:19.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:19.618 "oacs": { 00:09:19.618 "security": 0, 00:09:19.618 "format": 0, 00:09:19.618 "firmware": 0, 
00:09:19.618 "ns_manage": 0 00:09:19.618 }, 00:09:19.618 "multi_ctrlr": true, 00:09:19.618 "ana_reporting": false 00:09:19.618 }, 00:09:19.618 "vs": { 00:09:19.618 "nvme_version": "1.3" 00:09:19.618 }, 00:09:19.618 "ns_data": { 00:09:19.618 "id": 1, 00:09:19.618 "can_share": true 00:09:19.618 } 00:09:19.618 } 00:09:19.618 ], 00:09:19.618 "mp_policy": "active_passive" 00:09:19.618 } 00:09:19.618 } 00:09:19.618 ] 00:09:19.618 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=77253 00:09:19.618 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:19.618 14:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:19.877 Running I/O for 10 seconds... 00:09:20.814 Latency(us) 00:09:20.814 [2024-12-16T14:25:13.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.814 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:20.814 [2024-12-16T14:25:13.014Z] =================================================================================================================== 00:09:20.814 [2024-12-16T14:25:13.014Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:20.814 00:09:21.751 14:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 865835c8-a7a4-4524-a650-12e51b49c83a 00:09:21.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.751 Nvme0n1 : 2.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:21.751 [2024-12-16T14:25:13.951Z] =================================================================================================================== 00:09:21.751 [2024-12-16T14:25:13.951Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:21.751 00:09:22.010 true 00:09:22.010 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 865835c8-a7a4-4524-a650-12e51b49c83a 00:09:22.010 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:22.269 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:22.269 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:22.269 14:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 77253 00:09:22.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.837 Nvme0n1 : 3.00 6942.67 27.12 0.00 0.00 0.00 0.00 0.00 00:09:22.837 [2024-12-16T14:25:15.037Z] =================================================================================================================== 00:09:22.837 [2024-12-16T14:25:15.037Z] Total : 6942.67 27.12 0.00 0.00 0.00 0.00 0.00 00:09:22.837 00:09:23.774 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.774 Nvme0n1 : 4.00 6764.50 26.42 0.00 0.00 0.00 0.00 0.00 00:09:23.774 [2024-12-16T14:25:15.974Z] 
=================================================================================================================== 00:09:23.774 [2024-12-16T14:25:15.974Z] Total : 6764.50 26.42 0.00 0.00 0.00 0.00 0.00 00:09:23.774 00:09:25.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.148 Nvme0n1 : 5.00 6707.00 26.20 0.00 0.00 0.00 0.00 0.00 00:09:25.148 [2024-12-16T14:25:17.348Z] =================================================================================================================== 00:09:25.148 [2024-12-16T14:25:17.348Z] Total : 6707.00 26.20 0.00 0.00 0.00 0.00 0.00 00:09:25.148 00:09:26.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.084 Nvme0n1 : 6.00 6689.83 26.13 0.00 0.00 0.00 0.00 0.00 00:09:26.084 [2024-12-16T14:25:18.284Z] =================================================================================================================== 00:09:26.084 [2024-12-16T14:25:18.284Z] Total : 6689.83 26.13 0.00 0.00 0.00 0.00 0.00 00:09:26.084 00:09:27.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.031 Nvme0n1 : 7.00 6677.57 26.08 0.00 0.00 0.00 0.00 0.00 00:09:27.031 [2024-12-16T14:25:19.231Z] =================================================================================================================== 00:09:27.031 [2024-12-16T14:25:19.231Z] Total : 6677.57 26.08 0.00 0.00 0.00 0.00 0.00 00:09:27.031 00:09:27.980 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.980 Nvme0n1 : 8.00 6684.25 26.11 0.00 0.00 0.00 0.00 0.00 00:09:27.980 [2024-12-16T14:25:20.180Z] =================================================================================================================== 00:09:27.980 [2024-12-16T14:25:20.180Z] Total : 6684.25 26.11 0.00 0.00 0.00 0.00 0.00 00:09:27.980 00:09:28.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.916 Nvme0n1 : 9.00 6615.00 25.84 0.00 0.00 0.00 0.00 0.00 00:09:28.916 [2024-12-16T14:25:21.116Z] =================================================================================================================== 00:09:28.916 [2024-12-16T14:25:21.116Z] Total : 6615.00 25.84 0.00 0.00 0.00 0.00 0.00 00:09:28.916 00:09:29.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.853 Nvme0n1 : 10.00 6601.20 25.79 0.00 0.00 0.00 0.00 0.00 00:09:29.853 [2024-12-16T14:25:22.053Z] =================================================================================================================== 00:09:29.853 [2024-12-16T14:25:22.053Z] Total : 6601.20 25.79 0.00 0.00 0.00 0.00 0.00 00:09:29.853 00:09:29.853 00:09:29.853 Latency(us) 00:09:29.853 [2024-12-16T14:25:22.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.853 Nvme0n1 : 10.01 6604.14 25.80 0.00 0.00 19376.51 11796.48 132501.88 00:09:29.853 [2024-12-16T14:25:22.053Z] =================================================================================================================== 00:09:29.853 [2024-12-16T14:25:22.053Z] Total : 6604.14 25.80 0.00 0.00 19376.51 11796.48 132501.88 00:09:29.853 { 00:09:29.853 "results": [ 00:09:29.853 { 00:09:29.853 "job": "Nvme0n1", 00:09:29.853 "core_mask": "0x2", 00:09:29.853 "workload": "randwrite", 00:09:29.853 "status": "finished", 00:09:29.853 "queue_depth": 128, 00:09:29.853 "io_size": 4096, 00:09:29.853 "runtime": 
10.014927, 00:09:29.853 "iops": 6604.141997240718, 00:09:29.853 "mibps": 25.797429676721556, 00:09:29.853 "io_failed": 0, 00:09:29.853 "io_timeout": 0, 00:09:29.853 "avg_latency_us": 19376.505599472195, 00:09:29.853 "min_latency_us": 11796.48, 00:09:29.853 "max_latency_us": 132501.87636363637 00:09:29.853 } 00:09:29.853 ], 00:09:29.853 "core_count": 1 00:09:29.853 } 00:09:29.853 14:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 77237 00:09:29.853 14:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 77237 ']' 00:09:29.853 14:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 77237 00:09:29.853 14:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:29.853 14:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.853 14:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77237 00:09:29.853 14:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:29.853 14:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:29.853 14:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77237' 00:09:29.853 killing process with pid 77237 00:09:29.853 14:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 77237 00:09:29.853 Received shutdown signal, test time was about 10.000000 seconds 00:09:29.853 00:09:29.853 Latency(us) 00:09:29.853 [2024-12-16T14:25:22.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.853 [2024-12-16T14:25:22.053Z] =================================================================================================================== 00:09:29.853 [2024-12-16T14:25:22.053Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:29.853 14:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 77237 00:09:30.112 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:30.371 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:30.630 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:30.630 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 865835c8-a7a4-4524-a650-12e51b49c83a 00:09:30.889 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:30.889 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:30.889 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 76897 00:09:30.889 
14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 76897 00:09:30.889 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 76897 Killed "${NVMF_APP[@]}" "$@" 00:09:30.889 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:30.890 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:30.890 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:30.890 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.890 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:30.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.890 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=77392 00:09:30.890 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:30.890 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 77392 00:09:30.890 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 77392 ']' 00:09:30.890 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.890 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.890 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.890 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.890 14:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:30.890 [2024-12-16 14:25:23.040123] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:30.890 [2024-12-16 14:25:23.040204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.149 [2024-12-16 14:25:23.179265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.149 [2024-12-16 14:25:23.197377] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.149 [2024-12-16 14:25:23.197468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.149 [2024-12-16 14:25:23.197480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.149 [2024-12-16 14:25:23.197488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.149 [2024-12-16 14:25:23.197494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:31.149 [2024-12-16 14:25:23.197778] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.149 [2024-12-16 14:25:23.224807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.149 14:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.149 14:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:31.149 14:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:31.149 14:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.149 14:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:31.149 14:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.149 14:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:31.408 [2024-12-16 14:25:23.596562] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:31.408 [2024-12-16 14:25:23.596841] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:31.408 [2024-12-16 14:25:23.597089] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:31.667 14:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:31.667 14:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 380a4a5c-3b23-49a5-bb90-7f92eb01ec50 00:09:31.667 14:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=380a4a5c-3b23-49a5-bb90-7f92eb01ec50 00:09:31.667 14:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:31.667 14:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:31.667 14:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:31.667 14:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:31.667 14:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:31.926 14:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 380a4a5c-3b23-49a5-bb90-7f92eb01ec50 -t 2000 00:09:32.185 [ 00:09:32.185 { 00:09:32.185 "name": "380a4a5c-3b23-49a5-bb90-7f92eb01ec50", 00:09:32.185 "aliases": [ 00:09:32.185 "lvs/lvol" 00:09:32.185 ], 00:09:32.185 "product_name": "Logical Volume", 00:09:32.185 "block_size": 4096, 00:09:32.185 "num_blocks": 38912, 00:09:32.185 "uuid": "380a4a5c-3b23-49a5-bb90-7f92eb01ec50", 00:09:32.185 "assigned_rate_limits": { 00:09:32.185 "rw_ios_per_sec": 0, 00:09:32.185 "rw_mbytes_per_sec": 0, 00:09:32.185 "r_mbytes_per_sec": 0, 00:09:32.185 "w_mbytes_per_sec": 0 00:09:32.185 }, 00:09:32.185 
"claimed": false, 00:09:32.185 "zoned": false, 00:09:32.185 "supported_io_types": { 00:09:32.185 "read": true, 00:09:32.185 "write": true, 00:09:32.185 "unmap": true, 00:09:32.185 "flush": false, 00:09:32.185 "reset": true, 00:09:32.185 "nvme_admin": false, 00:09:32.185 "nvme_io": false, 00:09:32.185 "nvme_io_md": false, 00:09:32.185 "write_zeroes": true, 00:09:32.185 "zcopy": false, 00:09:32.185 "get_zone_info": false, 00:09:32.185 "zone_management": false, 00:09:32.185 "zone_append": false, 00:09:32.185 "compare": false, 00:09:32.185 "compare_and_write": false, 00:09:32.185 "abort": false, 00:09:32.185 "seek_hole": true, 00:09:32.185 "seek_data": true, 00:09:32.185 "copy": false, 00:09:32.185 "nvme_iov_md": false 00:09:32.185 }, 00:09:32.185 "driver_specific": { 00:09:32.185 "lvol": { 00:09:32.185 "lvol_store_uuid": "865835c8-a7a4-4524-a650-12e51b49c83a", 00:09:32.185 "base_bdev": "aio_bdev", 00:09:32.185 "thin_provision": false, 00:09:32.185 "num_allocated_clusters": 38, 00:09:32.185 "snapshot": false, 00:09:32.185 "clone": false, 00:09:32.185 "esnap_clone": false 00:09:32.185 } 00:09:32.185 } 00:09:32.185 } 00:09:32.185 ] 00:09:32.185 14:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:32.185 14:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 865835c8-a7a4-4524-a650-12e51b49c83a 00:09:32.185 14:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:32.443 14:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:32.443 14:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 865835c8-a7a4-4524-a650-12e51b49c83a 00:09:32.443 14:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:32.702 14:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:32.702 14:25:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:32.961 [2024-12-16 14:25:25.030385] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:32.961 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 865835c8-a7a4-4524-a650-12e51b49c83a 00:09:32.961 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:32.961 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 865835c8-a7a4-4524-a650-12e51b49c83a 00:09:32.961 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:32.961 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.961 14:25:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:32.961 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.961 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:32.961 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.961 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:32.961 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:32.961 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 865835c8-a7a4-4524-a650-12e51b49c83a 00:09:33.219 request: 00:09:33.219 { 00:09:33.219 "uuid": "865835c8-a7a4-4524-a650-12e51b49c83a", 00:09:33.219 "method": "bdev_lvol_get_lvstores", 00:09:33.219 "req_id": 1 00:09:33.219 } 00:09:33.219 Got JSON-RPC error response 00:09:33.219 response: 00:09:33.219 { 00:09:33.219 "code": -19, 00:09:33.219 "message": "No such device" 00:09:33.219 } 00:09:33.219 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:33.219 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.219 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:33.219 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.219 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:33.477 aio_bdev 00:09:33.477 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 380a4a5c-3b23-49a5-bb90-7f92eb01ec50 00:09:33.477 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=380a4a5c-3b23-49a5-bb90-7f92eb01ec50 00:09:33.477 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.477 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:33.477 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.477 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.477 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:33.736 14:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 380a4a5c-3b23-49a5-bb90-7f92eb01ec50 -t 2000 00:09:33.995 [ 00:09:33.995 { 
00:09:33.995 "name": "380a4a5c-3b23-49a5-bb90-7f92eb01ec50", 00:09:33.995 "aliases": [ 00:09:33.995 "lvs/lvol" 00:09:33.995 ], 00:09:33.995 "product_name": "Logical Volume", 00:09:33.995 "block_size": 4096, 00:09:33.995 "num_blocks": 38912, 00:09:33.995 "uuid": "380a4a5c-3b23-49a5-bb90-7f92eb01ec50", 00:09:33.995 "assigned_rate_limits": { 00:09:33.995 "rw_ios_per_sec": 0, 00:09:33.995 "rw_mbytes_per_sec": 0, 00:09:33.995 "r_mbytes_per_sec": 0, 00:09:33.995 "w_mbytes_per_sec": 0 00:09:33.995 }, 00:09:33.995 "claimed": false, 00:09:33.995 "zoned": false, 00:09:33.995 "supported_io_types": { 00:09:33.995 "read": true, 00:09:33.995 "write": true, 00:09:33.995 "unmap": true, 00:09:33.995 "flush": false, 00:09:33.995 "reset": true, 00:09:33.995 "nvme_admin": false, 00:09:33.995 "nvme_io": false, 00:09:33.995 "nvme_io_md": false, 00:09:33.995 "write_zeroes": true, 00:09:33.995 "zcopy": false, 00:09:33.995 "get_zone_info": false, 00:09:33.995 "zone_management": false, 00:09:33.995 "zone_append": false, 00:09:33.995 "compare": false, 00:09:33.995 "compare_and_write": false, 00:09:33.995 "abort": false, 00:09:33.995 "seek_hole": true, 00:09:33.995 "seek_data": true, 00:09:33.995 "copy": false, 00:09:33.995 "nvme_iov_md": false 00:09:33.995 }, 00:09:33.995 "driver_specific": { 00:09:33.995 "lvol": { 00:09:33.995 "lvol_store_uuid": "865835c8-a7a4-4524-a650-12e51b49c83a", 00:09:33.995 "base_bdev": "aio_bdev", 00:09:33.995 "thin_provision": false, 00:09:33.995 "num_allocated_clusters": 38, 00:09:33.995 "snapshot": false, 00:09:33.995 "clone": false, 00:09:33.995 "esnap_clone": false 00:09:33.995 } 00:09:33.995 } 00:09:33.995 } 00:09:33.995 ] 00:09:33.995 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:33.995 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 865835c8-a7a4-4524-a650-12e51b49c83a 00:09:33.995 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:34.253 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:34.253 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:34.253 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 865835c8-a7a4-4524-a650-12e51b49c83a 00:09:34.512 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:34.512 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 380a4a5c-3b23-49a5-bb90-7f92eb01ec50 00:09:34.770 14:25:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 865835c8-a7a4-4524-a650-12e51b49c83a 00:09:35.029 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:35.288 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:35.546 ************************************ 00:09:35.546 END TEST lvs_grow_dirty 00:09:35.546 ************************************ 00:09:35.546 00:09:35.546 real 0m19.305s 00:09:35.546 user 0m39.071s 00:09:35.546 sys 0m8.978s 00:09:35.546 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.546 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:35.546 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:35.546 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:35.546 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:35.546 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:35.804 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:35.804 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:35.804 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:35.804 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:35.804 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:35.804 nvmf_trace.0 00:09:35.804 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:35.804 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:35.804 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:35.804 14:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:36.371 rmmod nvme_tcp 00:09:36.371 rmmod nvme_fabrics 00:09:36.371 rmmod nvme_keyring 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 77392 ']' 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 77392 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 77392 ']' 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 77392 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:36.371 14:25:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77392 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.371 killing process with pid 77392 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77392' 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 77392 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 77392 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:36.371 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:36.630 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:36.630 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:36.630 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:36.630 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:36.630 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:36.630 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:36.630 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:36.631 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:36.631 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:36.631 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:36.631 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.631 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.631 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.631 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:36.631 00:09:36.631 real 0m39.548s 00:09:36.631 user 1m1.904s 00:09:36.631 sys 0m12.478s 00:09:36.631 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.631 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:36.631 ************************************ 00:09:36.631 END TEST nvmf_lvs_grow 00:09:36.631 ************************************ 00:09:36.631 14:25:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:36.631 14:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.631 14:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.631 14:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.631 ************************************ 00:09:36.631 START TEST nvmf_bdev_io_wait 00:09:36.631 ************************************ 00:09:36.631 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:36.890 * Looking for test storage... 
00:09:36.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:36.890 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:36.890 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:36.890 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:36.890 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:36.890 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.890 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.890 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.890 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.890 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.890 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.890 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.890 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:36.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.891 --rc genhtml_branch_coverage=1 00:09:36.891 --rc genhtml_function_coverage=1 00:09:36.891 --rc genhtml_legend=1 00:09:36.891 --rc geninfo_all_blocks=1 00:09:36.891 --rc geninfo_unexecuted_blocks=1 00:09:36.891 00:09:36.891 ' 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:36.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.891 --rc genhtml_branch_coverage=1 00:09:36.891 --rc genhtml_function_coverage=1 00:09:36.891 --rc genhtml_legend=1 00:09:36.891 --rc geninfo_all_blocks=1 00:09:36.891 --rc geninfo_unexecuted_blocks=1 00:09:36.891 00:09:36.891 ' 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:36.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.891 --rc genhtml_branch_coverage=1 00:09:36.891 --rc genhtml_function_coverage=1 00:09:36.891 --rc genhtml_legend=1 00:09:36.891 --rc geninfo_all_blocks=1 00:09:36.891 --rc geninfo_unexecuted_blocks=1 00:09:36.891 00:09:36.891 ' 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:36.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.891 --rc genhtml_branch_coverage=1 00:09:36.891 --rc genhtml_function_coverage=1 00:09:36.891 --rc genhtml_legend=1 00:09:36.891 --rc geninfo_all_blocks=1 00:09:36.891 --rc geninfo_unexecuted_blocks=1 00:09:36.891 00:09:36.891 ' 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.891 14:25:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.891 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
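Note (condensed sketch, not part of the captured run): with MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 set above, the target-side configuration this test drives further below (after nvmftestinit has built the test network) condenses to the following rpc.py calls, matching the rpc_cmd trace that follows:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport, options as used by this run
    $rpc bdev_malloc_create 64 512 -b Malloc0                                       # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420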
00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:36.891 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:36.892 
14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:36.892 Cannot find device "nvmf_init_br" 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:36.892 Cannot find device "nvmf_init_br2" 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:36.892 Cannot find device "nvmf_tgt_br" 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:36.892 Cannot find device "nvmf_tgt_br2" 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:36.892 Cannot find device "nvmf_init_br" 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:36.892 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:37.151 Cannot find device "nvmf_init_br2" 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:37.151 Cannot find device "nvmf_tgt_br" 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:37.151 Cannot find device "nvmf_tgt_br2" 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:37.151 Cannot find device "nvmf_br" 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:37.151 Cannot find device "nvmf_init_if" 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:37.151 Cannot find device "nvmf_init_if2" 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:37.151 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:37.151 
14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:37.151 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:37.151 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:37.410 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:37.410 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:37.410 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:37.410 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:37.410 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:37.410 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:37.410 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:37.410 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:37.410 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:37.410 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:37.410 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:37.410 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:09:37.410 00:09:37.410 --- 10.0.0.3 ping statistics --- 00:09:37.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.410 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:37.410 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:37.410 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:37.410 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:09:37.410 00:09:37.411 --- 10.0.0.4 ping statistics --- 00:09:37.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.411 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:37.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:37.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:37.411 00:09:37.411 --- 10.0.0.1 ping statistics --- 00:09:37.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.411 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:37.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:37.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:09:37.411 00:09:37.411 --- 10.0.0.2 ping statistics --- 00:09:37.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.411 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=77754 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 77754 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 77754 ']' 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.411 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.411 [2024-12-16 14:25:29.507094] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
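Note (condensed sketch, not part of the captured run): the nvmftestinit trace above assembles the virtual test network that the four pings just verified. Stripped of the trace markers (and of the matching "ip link set ... up" calls and the FORWARD accept rule on nvmf_br, which also appear in the trace), the topology amounts to:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator-side veth pairs
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target-side veth pairs
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                                # bridge joining both sides
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT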
00:09:37.411 [2024-12-16 14:25:29.507201] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.708 [2024-12-16 14:25:29.657154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:37.708 [2024-12-16 14:25:29.682185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.708 [2024-12-16 14:25:29.682256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.708 [2024-12-16 14:25:29.682282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.708 [2024-12-16 14:25:29.682290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.708 [2024-12-16 14:25:29.682296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:37.708 [2024-12-16 14:25:29.683015] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.708 [2024-12-16 14:25:29.683149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.708 [2024-12-16 14:25:29.683247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:37.708 [2024-12-16 14:25:29.683251] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.708 [2024-12-16 14:25:29.858407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.708 [2024-12-16 14:25:29.869080] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.708 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.968 Malloc0 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.968 [2024-12-16 14:25:29.924753] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=77776 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=77778 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:37.968 14:25:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:37.968 { 00:09:37.968 "params": { 00:09:37.968 "name": "Nvme$subsystem", 00:09:37.968 "trtype": "$TEST_TRANSPORT", 00:09:37.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.968 "adrfam": "ipv4", 00:09:37.968 "trsvcid": "$NVMF_PORT", 00:09:37.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.968 "hdgst": ${hdgst:-false}, 00:09:37.968 "ddgst": ${ddgst:-false} 00:09:37.968 }, 00:09:37.968 "method": "bdev_nvme_attach_controller" 00:09:37.968 } 00:09:37.968 EOF 00:09:37.968 )") 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=77780 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:37.968 { 00:09:37.968 "params": { 00:09:37.968 "name": "Nvme$subsystem", 00:09:37.968 "trtype": "$TEST_TRANSPORT", 00:09:37.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.968 "adrfam": "ipv4", 00:09:37.968 "trsvcid": "$NVMF_PORT", 00:09:37.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.968 "hdgst": ${hdgst:-false}, 00:09:37.968 "ddgst": ${ddgst:-false} 00:09:37.968 }, 00:09:37.968 "method": "bdev_nvme_attach_controller" 00:09:37.968 } 00:09:37.968 EOF 00:09:37.968 )") 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=77783 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:09:37.968 { 00:09:37.968 "params": { 00:09:37.968 "name": "Nvme$subsystem", 00:09:37.968 "trtype": "$TEST_TRANSPORT", 00:09:37.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.968 "adrfam": "ipv4", 00:09:37.968 "trsvcid": "$NVMF_PORT", 00:09:37.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.968 "hdgst": ${hdgst:-false}, 00:09:37.968 "ddgst": ${ddgst:-false} 00:09:37.968 }, 00:09:37.968 "method": "bdev_nvme_attach_controller" 00:09:37.968 } 00:09:37.968 EOF 00:09:37.968 )") 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:37.968 "params": { 00:09:37.968 "name": "Nvme1", 00:09:37.968 "trtype": "tcp", 00:09:37.968 "traddr": "10.0.0.3", 00:09:37.968 "adrfam": "ipv4", 00:09:37.968 "trsvcid": "4420", 00:09:37.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:37.968 "hdgst": false, 00:09:37.968 "ddgst": false 00:09:37.968 }, 00:09:37.968 "method": "bdev_nvme_attach_controller" 00:09:37.968 }' 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:37.968 { 00:09:37.968 "params": { 00:09:37.968 "name": "Nvme$subsystem", 00:09:37.968 "trtype": "$TEST_TRANSPORT", 00:09:37.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.968 "adrfam": "ipv4", 00:09:37.968 "trsvcid": "$NVMF_PORT", 00:09:37.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.968 "hdgst": ${hdgst:-false}, 00:09:37.968 "ddgst": ${ddgst:-false} 00:09:37.968 }, 00:09:37.968 "method": "bdev_nvme_attach_controller" 00:09:37.968 } 00:09:37.968 EOF 00:09:37.968 )") 00:09:37.968 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:37.969 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:37.969 "params": { 00:09:37.969 "name": "Nvme1", 00:09:37.969 "trtype": "tcp", 00:09:37.969 "traddr": "10.0.0.3", 00:09:37.969 "adrfam": "ipv4", 00:09:37.969 "trsvcid": "4420", 00:09:37.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:37.969 "hdgst": false, 00:09:37.969 "ddgst": false 00:09:37.969 }, 00:09:37.969 "method": "bdev_nvme_attach_controller" 00:09:37.969 }' 00:09:37.969 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:37.969 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:37.969 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:37.969 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:37.969 "params": { 00:09:37.969 "name": "Nvme1", 00:09:37.969 "trtype": "tcp", 00:09:37.969 "traddr": "10.0.0.3", 00:09:37.969 "adrfam": "ipv4", 00:09:37.969 "trsvcid": "4420", 00:09:37.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:37.969 "hdgst": false, 00:09:37.969 "ddgst": false 00:09:37.969 }, 00:09:37.969 "method": "bdev_nvme_attach_controller" 00:09:37.969 }' 00:09:37.969 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:37.969 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:37.969 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:37.969 "params": { 00:09:37.969 "name": "Nvme1", 00:09:37.969 "trtype": "tcp", 00:09:37.969 "traddr": "10.0.0.3", 00:09:37.969 "adrfam": "ipv4", 00:09:37.969 "trsvcid": "4420", 00:09:37.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:37.969 "hdgst": false, 00:09:37.969 "ddgst": false 00:09:37.969 }, 00:09:37.969 "method": "bdev_nvme_attach_controller" 00:09:37.969 }' 00:09:37.969 [2024-12-16 14:25:29.997001] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:37.969 [2024-12-16 14:25:29.997936] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:37.969 14:25:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 77776 00:09:37.969 [2024-12-16 14:25:30.000864] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:37.969 [2024-12-16 14:25:30.000933] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:37.969 [2024-12-16 14:25:30.001317] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:37.969 [2024-12-16 14:25:30.001382] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:37.969 [2024-12-16 14:25:30.015044] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:37.969 [2024-12-16 14:25:30.015138] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:38.228 [2024-12-16 14:25:30.188415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.228 [2024-12-16 14:25:30.204978] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:38.228 [2024-12-16 14:25:30.218765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:38.228 [2024-12-16 14:25:30.229963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.228 [2024-12-16 14:25:30.244019] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:38.228 [2024-12-16 14:25:30.256816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:38.228 [2024-12-16 14:25:30.276360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.228 [2024-12-16 14:25:30.291937] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:09:38.228 [2024-12-16 14:25:30.305752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:38.228 [2024-12-16 14:25:30.314674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.228 Running I/O for 1 seconds... 00:09:38.228 [2024-12-16 14:25:30.330838] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:09:38.228 [2024-12-16 14:25:30.344749] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:38.228 Running I/O for 1 seconds... 00:09:38.228 Running I/O for 1 seconds... 00:09:38.486 Running I/O for 1 seconds... 
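The commands echoed above amount to a small, repeatable target-side setup. As a hedged sketch (not part of the test suite itself), the same state can be reproduced by hand against a running nvmf_tgt using SPDK's rpc.py; the rpc.py location is assumed to be the usual scripts/rpc.py in an SPDK checkout, and the transport options may differ from what this particular run used:

    RPC=./scripts/rpc.py    # assumed path to SPDK's RPC client, relative to the repo root

    # The same sequence the bdev_io_wait test issues via rpc_cmd in the log above:
    # transport, backing bdev, subsystem, namespace, listener.
    $RPC nvmf_create_transport -t tcp
    $RPC bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The four bdevperf instances launched above (write, read, flush and unmap workloads; queue depth 128, 4 KiB I/O, 1 second each) then connect to that listener using the bdev_nvme_attach_controller JSON printed in the log, which gen_nvmf_target_json streams in over /dev/fd/63.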
00:09:39.420 6334.00 IOPS, 24.74 MiB/s 00:09:39.420 Latency(us) 00:09:39.420 [2024-12-16T14:25:31.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.420 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:39.420 Nvme1n1 : 1.02 6362.84 24.85 0.00 0.00 20003.75 6613.18 34078.72 00:09:39.420 [2024-12-16T14:25:31.620Z] =================================================================================================================== 00:09:39.420 [2024-12-16T14:25:31.620Z] Total : 6362.84 24.85 0.00 0.00 20003.75 6613.18 34078.72 00:09:39.420 8287.00 IOPS, 32.37 MiB/s 00:09:39.420 Latency(us) 00:09:39.420 [2024-12-16T14:25:31.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.420 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:39.420 Nvme1n1 : 1.01 8321.93 32.51 0.00 0.00 15292.98 9830.40 25856.93 00:09:39.420 [2024-12-16T14:25:31.620Z] =================================================================================================================== 00:09:39.420 [2024-12-16T14:25:31.620Z] Total : 8321.93 32.51 0.00 0.00 15292.98 9830.40 25856.93 00:09:39.420 159056.00 IOPS, 621.31 MiB/s 00:09:39.420 Latency(us) 00:09:39.420 [2024-12-16T14:25:31.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.420 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:39.420 Nvme1n1 : 1.00 158730.67 620.04 0.00 0.00 802.24 376.09 2040.55 00:09:39.420 [2024-12-16T14:25:31.620Z] =================================================================================================================== 00:09:39.420 [2024-12-16T14:25:31.620Z] Total : 158730.67 620.04 0.00 0.00 802.24 376.09 2040.55 00:09:39.420 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 77778 00:09:39.420 6573.00 IOPS, 25.68 MiB/s [2024-12-16T14:25:31.620Z] 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 77780 00:09:39.420 00:09:39.420 Latency(us) 00:09:39.420 [2024-12-16T14:25:31.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.420 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:39.420 Nvme1n1 : 1.01 6711.62 26.22 0.00 0.00 19012.21 5183.30 41466.41 00:09:39.420 [2024-12-16T14:25:31.620Z] =================================================================================================================== 00:09:39.420 [2024-12-16T14:25:31.620Z] Total : 6711.62 26.22 0.00 0.00 19012.21 5183.30 41466.41 00:09:39.420 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 77783 00:09:39.420 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:39.420 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.420 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.420 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.420 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:39.420 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:39.420 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:09:39.420 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:39.679 rmmod nvme_tcp 00:09:39.679 rmmod nvme_fabrics 00:09:39.679 rmmod nvme_keyring 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 77754 ']' 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 77754 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 77754 ']' 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 77754 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77754 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:39.679 killing process with pid 77754 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77754' 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 77754 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 77754 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:39.679 14:25:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:39.679 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:39.939 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:39.939 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:39.939 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:39.939 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:39.939 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:39.939 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:39.939 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:39.939 14:25:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:39.939 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:39.939 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.939 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.939 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.939 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:39.939 00:09:39.939 real 0m3.230s 00:09:39.939 user 0m12.788s 00:09:39.939 sys 0m1.994s 00:09:39.939 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.939 ************************************ 00:09:39.939 END TEST nvmf_bdev_io_wait 00:09:39.939 ************************************ 00:09:39.939 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.939 14:25:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:39.939 14:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:39.939 14:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.939 14:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:39.939 ************************************ 00:09:39.939 START TEST nvmf_queue_depth 00:09:39.939 ************************************ 00:09:39.939 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:40.199 * Looking for test 
storage... 00:09:40.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.199 --rc genhtml_branch_coverage=1 00:09:40.199 --rc genhtml_function_coverage=1 00:09:40.199 --rc genhtml_legend=1 00:09:40.199 --rc geninfo_all_blocks=1 00:09:40.199 --rc geninfo_unexecuted_blocks=1 00:09:40.199 00:09:40.199 ' 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.199 --rc genhtml_branch_coverage=1 00:09:40.199 --rc genhtml_function_coverage=1 00:09:40.199 --rc genhtml_legend=1 00:09:40.199 --rc geninfo_all_blocks=1 00:09:40.199 --rc geninfo_unexecuted_blocks=1 00:09:40.199 00:09:40.199 ' 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:40.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.199 --rc genhtml_branch_coverage=1 00:09:40.199 --rc genhtml_function_coverage=1 00:09:40.199 --rc genhtml_legend=1 00:09:40.199 --rc geninfo_all_blocks=1 00:09:40.199 --rc geninfo_unexecuted_blocks=1 00:09:40.199 00:09:40.199 ' 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.199 --rc genhtml_branch_coverage=1 00:09:40.199 --rc genhtml_function_coverage=1 00:09:40.199 --rc genhtml_legend=1 00:09:40.199 --rc geninfo_all_blocks=1 00:09:40.199 --rc geninfo_unexecuted_blocks=1 00:09:40.199 00:09:40.199 ' 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.199 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:40.200 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:40.200 
14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:40.200 14:25:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:40.200 Cannot find device "nvmf_init_br" 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:40.200 Cannot find device "nvmf_init_br2" 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:40.200 Cannot find device "nvmf_tgt_br" 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:40.200 Cannot find device "nvmf_tgt_br2" 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:40.200 Cannot find device "nvmf_init_br" 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:40.200 Cannot find device "nvmf_init_br2" 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:40.200 Cannot find device "nvmf_tgt_br" 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:40.200 Cannot find device "nvmf_tgt_br2" 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:40.200 Cannot find device "nvmf_br" 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:40.200 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:40.460 Cannot find device "nvmf_init_if" 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:40.460 Cannot find device "nvmf_init_if2" 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:40.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.460 14:25:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:40.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:40.460 
14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:40.460 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:40.460 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:09:40.460 00:09:40.460 --- 10.0.0.3 ping statistics --- 00:09:40.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.460 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:40.460 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:40.460 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:09:40.460 00:09:40.460 --- 10.0.0.4 ping statistics --- 00:09:40.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.460 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:40.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:40.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:09:40.460 00:09:40.460 --- 10.0.0.1 ping statistics --- 00:09:40.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.460 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:09:40.460 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:40.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:40.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:09:40.460 00:09:40.460 --- 10.0.0.2 ping statistics --- 00:09:40.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.460 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=78043 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 78043 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 78043 ']' 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.719 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.719 [2024-12-16 14:25:32.735797] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:40.719 [2024-12-16 14:25:32.735905] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.719 [2024-12-16 14:25:32.884192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.719 [2024-12-16 14:25:32.904755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.719 [2024-12-16 14:25:32.904821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.719 [2024-12-16 14:25:32.904832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.719 [2024-12-16 14:25:32.904839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.719 [2024-12-16 14:25:32.904846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.719 [2024-12-16 14:25:32.905155] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.978 [2024-12-16 14:25:32.933871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:40.978 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.978 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:40.978 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:40.979 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:40.979 14:25:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.979 [2024-12-16 14:25:33.025988] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.979 Malloc0 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.979 [2024-12-16 14:25:33.068589] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=78062 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 78062 /var/tmp/bdevperf.sock 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 78062 ']' 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.979 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:40.979 [2024-12-16 14:25:33.129939] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:40.979 [2024-12-16 14:25:33.130044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78062 ] 00:09:41.238 [2024-12-16 14:25:33.281940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.238 [2024-12-16 14:25:33.306931] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.238 [2024-12-16 14:25:33.341215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:41.238 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.238 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:41.238 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:41.238 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.238 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.496 NVMe0n1 00:09:41.496 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.496 14:25:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:41.496 Running I/O for 10 seconds... 00:09:43.809 7163.00 IOPS, 27.98 MiB/s [2024-12-16T14:25:36.944Z] 7683.50 IOPS, 30.01 MiB/s [2024-12-16T14:25:37.878Z] 7910.33 IOPS, 30.90 MiB/s [2024-12-16T14:25:38.813Z] 8118.00 IOPS, 31.71 MiB/s [2024-12-16T14:25:39.748Z] 8292.60 IOPS, 32.39 MiB/s [2024-12-16T14:25:40.695Z] 8458.83 IOPS, 33.04 MiB/s [2024-12-16T14:25:41.665Z] 8520.71 IOPS, 33.28 MiB/s [2024-12-16T14:25:43.043Z] 8590.38 IOPS, 33.56 MiB/s [2024-12-16T14:25:43.611Z] 8668.11 IOPS, 33.86 MiB/s [2024-12-16T14:25:43.870Z] 8784.80 IOPS, 34.32 MiB/s 00:09:51.670 Latency(us) 00:09:51.670 [2024-12-16T14:25:43.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.670 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:51.670 Verification LBA range: start 0x0 length 0x4000 00:09:51.670 NVMe0n1 : 10.07 8808.34 34.41 0.00 0.00 115661.08 19899.11 88652.33 00:09:51.670 [2024-12-16T14:25:43.870Z] =================================================================================================================== 00:09:51.670 [2024-12-16T14:25:43.870Z] Total : 8808.34 34.41 0.00 0.00 115661.08 19899.11 88652.33 00:09:51.670 { 00:09:51.670 "results": [ 00:09:51.670 { 00:09:51.670 "job": "NVMe0n1", 00:09:51.670 "core_mask": "0x1", 00:09:51.670 "workload": "verify", 00:09:51.670 "status": "finished", 00:09:51.670 "verify_range": { 00:09:51.670 "start": 0, 00:09:51.670 "length": 16384 00:09:51.670 }, 00:09:51.670 "queue_depth": 1024, 00:09:51.670 "io_size": 4096, 00:09:51.670 "runtime": 10.072612, 00:09:51.670 "iops": 8808.34087523673, 00:09:51.670 "mibps": 34.40758154389348, 00:09:51.670 "io_failed": 0, 00:09:51.670 "io_timeout": 0, 00:09:51.670 "avg_latency_us": 115661.08053285352, 00:09:51.670 "min_latency_us": 19899.112727272728, 00:09:51.670 "max_latency_us": 88652.33454545455 00:09:51.670 
} 00:09:51.670 ], 00:09:51.670 "core_count": 1 00:09:51.670 } 00:09:51.670 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 78062 00:09:51.670 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 78062 ']' 00:09:51.670 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 78062 00:09:51.670 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:51.670 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.670 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78062 00:09:51.670 killing process with pid 78062 00:09:51.670 Received shutdown signal, test time was about 10.000000 seconds 00:09:51.670 00:09:51.670 Latency(us) 00:09:51.670 [2024-12-16T14:25:43.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.670 [2024-12-16T14:25:43.870Z] =================================================================================================================== 00:09:51.670 [2024-12-16T14:25:43.870Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:51.670 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.670 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.670 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78062' 00:09:51.670 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 78062 00:09:51.670 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 78062 00:09:51.670 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:51.670 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:51.670 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:51.670 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:51.930 rmmod nvme_tcp 00:09:51.930 rmmod nvme_fabrics 00:09:51.930 rmmod nvme_keyring 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 78043 ']' 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 78043 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 78043 ']' 00:09:51.930 
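The queue-depth phase above attaches the remote namespace to an already-running bdevperf instance over its RPC socket and then triggers the timed run from bdevperf.py. A minimal sketch of that sequence follows; the rpc.py and bdevperf.py calls are taken from the trace, while the bdevperf launch line itself is not shown in this excerpt, so its binary path and -q/-o/-w/-t flags are reconstructed from the reported result fields (queue_depth=1024, io_size=4096, workload=verify, roughly 10 s runtime).

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bdevperf.sock

  # Start bdevperf waiting for a perform_tests RPC (-z); flags reconstructed, see above.
  "$SPDK/build/examples/bdevperf" -m 0x1 -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &

  # Attach the NVMe-oF/TCP namespace as bdev NVMe0n1, exactly as in the trace.
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Kick off the configured workload and print the JSON summary seen above.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
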
14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 78043 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78043 00:09:51.930 killing process with pid 78043 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78043' 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 78043 00:09:51.930 14:25:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 78043 00:09:51.930 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:51.930 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:51.930 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:51.930 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:51.930 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:51.930 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:52.189 14:25:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:52.189 00:09:52.189 real 0m12.264s 00:09:52.189 user 0m20.938s 00:09:52.189 sys 0m2.142s 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.189 ************************************ 00:09:52.189 END TEST nvmf_queue_depth 00:09:52.189 ************************************ 00:09:52.189 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.449 14:25:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:52.449 14:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.450 ************************************ 00:09:52.450 START TEST nvmf_target_multipath 00:09:52.450 ************************************ 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:52.450 * Looking for test storage... 
00:09:52.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:52.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.450 --rc genhtml_branch_coverage=1 00:09:52.450 --rc genhtml_function_coverage=1 00:09:52.450 --rc genhtml_legend=1 00:09:52.450 --rc geninfo_all_blocks=1 00:09:52.450 --rc geninfo_unexecuted_blocks=1 00:09:52.450 00:09:52.450 ' 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:52.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.450 --rc genhtml_branch_coverage=1 00:09:52.450 --rc genhtml_function_coverage=1 00:09:52.450 --rc genhtml_legend=1 00:09:52.450 --rc geninfo_all_blocks=1 00:09:52.450 --rc geninfo_unexecuted_blocks=1 00:09:52.450 00:09:52.450 ' 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:52.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.450 --rc genhtml_branch_coverage=1 00:09:52.450 --rc genhtml_function_coverage=1 00:09:52.450 --rc genhtml_legend=1 00:09:52.450 --rc geninfo_all_blocks=1 00:09:52.450 --rc geninfo_unexecuted_blocks=1 00:09:52.450 00:09:52.450 ' 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:52.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.450 --rc genhtml_branch_coverage=1 00:09:52.450 --rc genhtml_function_coverage=1 00:09:52.450 --rc genhtml_legend=1 00:09:52.450 --rc geninfo_all_blocks=1 00:09:52.450 --rc geninfo_unexecuted_blocks=1 00:09:52.450 00:09:52.450 ' 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.450 
14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.450 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:52.451 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:52.451 14:25:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:52.451 Cannot find device "nvmf_init_br" 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:52.451 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:52.710 Cannot find device "nvmf_init_br2" 00:09:52.710 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:52.711 Cannot find device "nvmf_tgt_br" 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:52.711 Cannot find device "nvmf_tgt_br2" 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:52.711 Cannot find device "nvmf_init_br" 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:52.711 Cannot find device "nvmf_init_br2" 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:52.711 Cannot find device "nvmf_tgt_br" 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:52.711 Cannot find device "nvmf_tgt_br2" 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:52.711 Cannot find device "nvmf_br" 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:52.711 Cannot find device "nvmf_init_if" 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:52.711 Cannot find device "nvmf_init_if2" 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:52.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:52.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:52.711 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:52.971 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:52.971 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:52.971 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:52.971 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:52.971 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:52.971 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:52.971 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:52.971 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:52.971 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:52.971 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:52.971 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:52.971 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:52.971 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:52.971 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:09:52.971 00:09:52.971 --- 10.0.0.3 ping statistics --- 00:09:52.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.971 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:09:52.971 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:52.971 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:52.971 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:09:52.971 00:09:52.971 --- 10.0.0.4 ping statistics --- 00:09:52.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.971 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:52.971 14:25:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:52.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:09:52.971 00:09:52.971 --- 10.0.0.1 ping statistics --- 00:09:52.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.971 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:52.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:09:52.971 00:09:52.971 --- 10.0.0.2 ping statistics --- 00:09:52.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.971 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:52.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
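The nvmf_veth_init sequence above builds the test network that the multipath run depends on: a network namespace for the target, veth pairs toward both the initiator (10.0.0.1/10.0.0.2) and the target (10.0.0.3/10.0.0.4), a bridge joining the host-side peers, iptables ACCEPT rules for port 4420, and ping checks in both directions. A condensed sketch using the same names and addresses as the trace; only the first initiator/target pair is shown, the *_if2/*_br2 pair follows the same pattern.

  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"

  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
  ip link set nvmf_tgt_if netns "$NS"

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set lo up

  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                       # initiator -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> initiator
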
00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=78435 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 78435 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 78435 ']' 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.971 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:52.971 [2024-12-16 14:25:45.095512] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:52.971 [2024-12-16 14:25:45.096231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.230 [2024-12-16 14:25:45.248406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.230 [2024-12-16 14:25:45.274870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.230 [2024-12-16 14:25:45.275163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.230 [2024-12-16 14:25:45.275341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.230 [2024-12-16 14:25:45.275527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.230 [2024-12-16 14:25:45.275706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
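With the namespace in place, nvmf_tgt is launched inside it so that the target owns the 10.0.0.3/10.0.0.4 addresses, and the harness (waitforlisten) blocks until the RPC socket answers. A rough equivalent is sketched below, assuming the default /var/tmp/spdk.sock RPC socket; polling spdk_get_version is only a stand-in for the waitforlisten helper.

  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

  # Wait until the target is up and answering RPCs on /var/tmp/spdk.sock.
  until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done
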
00:09:53.230 [2024-12-16 14:25:45.276636] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.230 [2024-12-16 14:25:45.276697] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.230 [2024-12-16 14:25:45.276772] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.230 [2024-12-16 14:25:45.276770] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.230 [2024-12-16 14:25:45.312575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:53.230 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.230 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:09:53.230 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:53.230 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:53.230 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:53.488 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.488 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:53.747 [2024-12-16 14:25:45.725761] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.747 14:25:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:54.006 Malloc0 00:09:54.006 14:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:54.264 14:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:54.522 14:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:54.781 [2024-12-16 14:25:46.879187] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:54.781 14:25:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:55.040 [2024-12-16 14:25:47.143436] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:55.040 14:25:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:55.298 14:25:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:09:55.298 14:25:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:55.298 14:25:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:09:55.298 14:25:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:55.298 14:25:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:55.298 14:25:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
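The target is then given one Malloc-backed namespace reachable through two ANA-reporting listeners, and the host connects to both portals, which yields a single multipath head node (/dev/nvme0n1) with per-path block nodes nvme0c0n1 and nvme0c1n1, as the trace finds under /sys/class/nvme-subsystem. Condensed from the RPC and nvme-cli calls above; the hostnqn/hostid values are the ones generated for this particular run.

  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1

  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" bdev_malloc_create 64 512 -b Malloc0
  "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME -r    # -r enables ANA reporting
  "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.4 -s 4420

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181
  HOSTID=${HOSTNQN##*:}
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n "$NQN" -a 10.0.0.3 -s 4420 -g -G
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n "$NQN" -a 10.0.0.4 -s 4420 -g -G
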
00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=78517 00:09:57.830 14:25:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:57.830 [global] 00:09:57.830 thread=1 00:09:57.830 invalidate=1 00:09:57.830 rw=randrw 00:09:57.830 time_based=1 00:09:57.830 runtime=6 00:09:57.830 ioengine=libaio 00:09:57.830 direct=1 00:09:57.830 bs=4096 00:09:57.830 iodepth=128 00:09:57.830 norandommap=0 00:09:57.830 numjobs=1 00:09:57.830 00:09:57.830 verify_dump=1 00:09:57.830 verify_backlog=512 00:09:57.830 verify_state_save=0 00:09:57.830 do_verify=1 00:09:57.830 verify=crc32c-intel 00:09:57.830 [job0] 00:09:57.830 filename=/dev/nvme0n1 00:09:57.830 Could not set queue depth (nvme0n1) 00:09:57.830 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:57.830 fio-3.35 00:09:57.830 Starting 1 thread 00:09:58.397 14:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:58.655 14:25:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:58.970 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:58.970 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:58.970 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
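The fio job file printed above is generated by fio-wrapper (-p nvmf -i 4096 -d 128 -t randrw -r 6 -v) and points at /dev/nvme0n1, the multipath head node, while the per-path nodes have their ANA states flipped underneath it. Expressed as a standalone fio command line, the same job would look roughly like this:

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread=1 \
      --invalidate=1 --bs=4096 --iodepth=128 --rw=randrw --time_based=1 --runtime=6 \
      --numjobs=1 --norandommap=0 --do_verify=1 --verify=crc32c-intel \
      --verify_backlog=512 --verify_dump=1 --verify_state_save=0
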
00:09:58.970 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:58.970 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:58.970 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:58.970 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:58.970 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:58.970 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:58.970 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:58.970 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:58.970 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:58.970 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:59.229 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:59.487 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:59.487 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:59.487 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:59.487 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:59.487 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:59.487 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:59.487 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:59.487 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:59.487 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:59.487 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:59.487 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:59.487 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:59.487 14:25:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 78517 00:10:03.673 00:10:03.673 job0: (groupid=0, jobs=1): err= 0: pid=78544: Mon Dec 16 14:25:55 2024 00:10:03.673 read: IOPS=10.2k, BW=39.7MiB/s (41.6MB/s)(238MiB/6007msec) 00:10:03.673 slat (usec): min=7, max=7909, avg=58.69, stdev=232.70 00:10:03.673 clat (usec): min=1518, max=16383, avg=8581.21, stdev=1604.42 00:10:03.673 lat (usec): min=1527, max=16418, avg=8639.90, stdev=1609.90 00:10:03.673 clat percentiles (usec): 00:10:03.673 | 1.00th=[ 4359], 5.00th=[ 6521], 10.00th=[ 7177], 20.00th=[ 7635], 00:10:03.673 | 30.00th=[ 7898], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8586], 00:10:03.674 | 70.00th=[ 8848], 80.00th=[ 9372], 90.00th=[10552], 95.00th=[12125], 00:10:03.674 | 99.00th=[13566], 99.50th=[14222], 99.90th=[15139], 99.95th=[15664], 00:10:03.674 | 99.99th=[16188] 00:10:03.674 bw ( KiB/s): min= 6328, max=27736, per=51.01%, avg=20728.00, stdev=6474.31, samples=11 00:10:03.674 iops : min= 1582, max= 6934, avg=5182.00, stdev=1618.58, samples=11 00:10:03.674 write: IOPS=5938, BW=23.2MiB/s (24.3MB/s)(123MiB/5324msec); 0 zone resets 00:10:03.674 slat (usec): min=15, max=2268, avg=66.30, stdev=165.92 00:10:03.674 clat (usec): min=1326, max=15750, avg=7538.42, stdev=1480.03 00:10:03.674 lat (usec): min=1349, max=15776, avg=7604.71, stdev=1487.35 00:10:03.674 clat percentiles (usec): 00:10:03.674 | 1.00th=[ 3294], 5.00th=[ 4359], 10.00th=[ 5932], 20.00th=[ 6849], 00:10:03.674 | 30.00th=[ 7177], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7832], 00:10:03.674 | 70.00th=[ 8029], 80.00th=[ 8356], 90.00th=[ 8979], 95.00th=[ 9896], 00:10:03.674 | 99.00th=[11731], 99.50th=[12387], 99.90th=[13566], 99.95th=[14222], 00:10:03.674 | 99.99th=[14484] 00:10:03.674 bw ( KiB/s): min= 6752, max=27552, per=87.63%, avg=20814.55, stdev=6259.67, samples=11 00:10:03.674 iops : min= 1688, max= 6888, avg=5203.64, stdev=1564.92, samples=11 00:10:03.674 lat (msec) : 2=0.03%, 4=1.61%, 10=88.71%, 20=9.65% 00:10:03.674 cpu : usr=5.59%, sys=20.80%, ctx=5406, majf=0, minf=108 00:10:03.674 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:03.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.674 issued rwts: total=61025,31615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.674 00:10:03.674 Run status group 0 (all jobs): 00:10:03.674 READ: bw=39.7MiB/s (41.6MB/s), 39.7MiB/s-39.7MiB/s (41.6MB/s-41.6MB/s), io=238MiB (250MB), run=6007-6007msec 00:10:03.674 WRITE: bw=23.2MiB/s (24.3MB/s), 23.2MiB/s-23.2MiB/s (24.3MB/s-24.3MB/s), io=123MiB (129MB), run=5324-5324msec 00:10:03.674 00:10:03.674 Disk stats (read/write): 00:10:03.674 nvme0n1: ios=60199/31050, merge=0/0, ticks=495176/220009, in_queue=715185, util=98.61% 00:10:03.674 14:25:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:04.242 14:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:10:04.242 14:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:04.242 14:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:04.242 14:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:04.242 14:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:04.242 14:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:04.242 14:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:04.242 14:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:04.242 14:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:04.242 14:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:04.242 14:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:04.242 14:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:04.242 14:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:04.242 14:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:04.242 14:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:04.242 14:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=78619 00:10:04.242 14:25:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:04.242 [global] 00:10:04.242 thread=1 00:10:04.242 invalidate=1 00:10:04.242 rw=randrw 00:10:04.242 time_based=1 00:10:04.242 runtime=6 00:10:04.242 ioengine=libaio 00:10:04.242 direct=1 00:10:04.242 bs=4096 00:10:04.242 iodepth=128 00:10:04.242 norandommap=0 00:10:04.242 numjobs=1 00:10:04.242 00:10:04.242 verify_dump=1 00:10:04.242 verify_backlog=512 00:10:04.242 verify_state_save=0 00:10:04.242 do_verify=1 00:10:04.242 verify=crc32c-intel 00:10:04.242 [job0] 00:10:04.242 filename=/dev/nvme0n1 00:10:04.242 Could not set queue depth (nvme0n1) 00:10:04.500 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.500 fio-3.35 00:10:04.500 Starting 1 thread 00:10:05.438 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:05.697 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:05.957 
14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:05.957 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:05.957 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:05.957 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:05.957 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:05.957 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:05.957 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:05.957 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:05.957 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:05.957 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:05.957 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:05.957 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:05.957 14:25:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:06.216 14:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:06.475 14:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:06.475 14:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:06.475 14:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:06.475 14:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:06.475 14:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:06.475 14:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:06.475 14:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:06.475 14:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:06.475 14:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:06.475 14:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:06.475 14:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:06.475 14:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:06.475 14:25:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 78619 00:10:10.674 00:10:10.674 job0: (groupid=0, jobs=1): err= 0: pid=78640: Mon Dec 16 14:26:02 2024 00:10:10.674 read: IOPS=11.4k, BW=44.3MiB/s (46.5MB/s)(266MiB/6006msec) 00:10:10.674 slat (usec): min=2, max=5996, avg=43.06, stdev=201.95 00:10:10.674 clat (usec): min=577, max=16463, avg=7752.02, stdev=2182.25 00:10:10.674 lat (usec): min=586, max=16478, avg=7795.08, stdev=2196.45 00:10:10.674 clat percentiles (usec): 00:10:10.674 | 1.00th=[ 1205], 5.00th=[ 4015], 10.00th=[ 4883], 20.00th=[ 6063], 00:10:10.674 | 30.00th=[ 7111], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8356], 00:10:10.674 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9765], 95.00th=[11731], 00:10:10.674 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13960], 99.95th=[14222], 00:10:10.674 | 99.99th=[15008] 00:10:10.674 bw ( KiB/s): min=11032, max=39480, per=52.42%, avg=23801.73, stdev=7934.31, samples=11 00:10:10.674 iops : min= 2758, max= 9870, avg=5950.36, stdev=1983.48, samples=11 00:10:10.674 write: IOPS=6456, BW=25.2MiB/s (26.4MB/s)(139MiB/5514msec); 0 zone resets 00:10:10.674 slat (usec): min=4, max=1828, avg=56.19, stdev=143.28 00:10:10.674 clat (usec): min=1685, max=14892, avg=6580.72, stdev=1821.98 00:10:10.674 lat (usec): min=1702, max=14924, avg=6636.91, stdev=1838.02 00:10:10.674 clat percentiles (usec): 00:10:10.674 | 1.00th=[ 2769], 5.00th=[ 3458], 10.00th=[ 3884], 20.00th=[ 4555], 00:10:10.674 | 30.00th=[ 5342], 40.00th=[ 6783], 50.00th=[ 7242], 60.00th=[ 7504], 00:10:10.674 | 70.00th=[ 7767], 80.00th=[ 8029], 90.00th=[ 8356], 95.00th=[ 8717], 00:10:10.674 | 99.00th=[11076], 99.50th=[11731], 99.90th=[12780], 99.95th=[13304], 00:10:10.674 | 99.99th=[14222] 00:10:10.674 bw ( KiB/s): min=11664, max=38568, per=92.20%, avg=23811.73, stdev=7696.87, samples=11 00:10:10.674 iops : min= 2916, max= 9642, avg=5952.91, stdev=1924.18, samples=11 00:10:10.674 lat (usec) : 750=0.01%, 1000=0.10% 00:10:10.674 lat (msec) : 2=1.23%, 4=5.83%, 10=86.44%, 20=6.39% 00:10:10.674 cpu : usr=6.01%, sys=22.16%, ctx=5850, majf=0, minf=90 00:10:10.674 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:10.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:10.674 issued rwts: total=68175,35600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.674 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:10:10.674 00:10:10.674 Run status group 0 (all jobs): 00:10:10.674 READ: bw=44.3MiB/s (46.5MB/s), 44.3MiB/s-44.3MiB/s (46.5MB/s-46.5MB/s), io=266MiB (279MB), run=6006-6006msec 00:10:10.674 WRITE: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=139MiB (146MB), run=5514-5514msec 00:10:10.674 00:10:10.674 Disk stats (read/write): 00:10:10.674 nvme0n1: ios=67240/35005, merge=0/0, ticks=499038/215593, in_queue=714631, util=98.68% 00:10:10.674 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:10.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:10.674 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:10.674 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:10:10.674 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:10.674 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.674 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:10.674 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.674 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:10:10.674 14:26:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.961 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:10.961 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:10.961 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:10.961 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:10.961 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:10.961 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:10.961 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:10.961 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:10.961 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:10.961 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:10.961 rmmod nvme_tcp 00:10:10.961 rmmod nvme_fabrics 00:10:10.961 rmmod nvme_keyring 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 78435 ']' 
00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 78435 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 78435 ']' 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 78435 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78435 00:10:11.220 killing process with pid 78435 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78435' 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 78435 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 78435 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:11.220 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:11.479 14:26:03 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:11.479 00:10:11.479 real 0m19.176s 00:10:11.479 user 1m10.850s 00:10:11.479 sys 0m10.119s 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:11.479 ************************************ 00:10:11.479 END TEST nvmf_target_multipath 00:10:11.479 ************************************ 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.479 14:26:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.479 ************************************ 00:10:11.479 START TEST nvmf_zcopy 00:10:11.480 ************************************ 00:10:11.480 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:11.740 * Looking for test storage... 
00:10:11.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:11.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.740 --rc genhtml_branch_coverage=1 00:10:11.740 --rc genhtml_function_coverage=1 00:10:11.740 --rc genhtml_legend=1 00:10:11.740 --rc geninfo_all_blocks=1 00:10:11.740 --rc geninfo_unexecuted_blocks=1 00:10:11.740 00:10:11.740 ' 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:11.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.740 --rc genhtml_branch_coverage=1 00:10:11.740 --rc genhtml_function_coverage=1 00:10:11.740 --rc genhtml_legend=1 00:10:11.740 --rc geninfo_all_blocks=1 00:10:11.740 --rc geninfo_unexecuted_blocks=1 00:10:11.740 00:10:11.740 ' 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:11.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.740 --rc genhtml_branch_coverage=1 00:10:11.740 --rc genhtml_function_coverage=1 00:10:11.740 --rc genhtml_legend=1 00:10:11.740 --rc geninfo_all_blocks=1 00:10:11.740 --rc geninfo_unexecuted_blocks=1 00:10:11.740 00:10:11.740 ' 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:11.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.740 --rc genhtml_branch_coverage=1 00:10:11.740 --rc genhtml_function_coverage=1 00:10:11.740 --rc genhtml_legend=1 00:10:11.740 --rc geninfo_all_blocks=1 00:10:11.740 --rc geninfo_unexecuted_blocks=1 00:10:11.740 00:10:11.740 ' 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.740 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:11.740 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:11.741 Cannot find device "nvmf_init_br" 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:11.741 14:26:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:11.741 Cannot find device "nvmf_init_br2" 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:11.741 Cannot find device "nvmf_tgt_br" 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:11.741 Cannot find device "nvmf_tgt_br2" 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:11.741 Cannot find device "nvmf_init_br" 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:11.741 Cannot find device "nvmf_init_br2" 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:11.741 Cannot find device "nvmf_tgt_br" 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:11.741 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:12.000 Cannot find device "nvmf_tgt_br2" 00:10:12.000 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:12.000 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:12.000 Cannot find device "nvmf_br" 00:10:12.000 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:12.000 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:12.000 Cannot find device "nvmf_init_if" 00:10:12.000 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:12.000 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:12.000 Cannot find device "nvmf_init_if2" 00:10:12.000 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:12.000 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:12.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.000 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:12.000 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:12.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.000 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:12.000 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:12.000 14:26:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:12.000 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:12.259 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:12.259 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:12.259 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:12.259 14:26:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:12.259 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:12.259 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:12.259 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:12.259 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:12.259 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:10:12.259 00:10:12.259 --- 10.0.0.3 ping statistics --- 00:10:12.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.260 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:12.260 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:12.260 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:10:12.260 00:10:12.260 --- 10.0.0.4 ping statistics --- 00:10:12.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.260 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:12.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:12.260 00:10:12.260 --- 10.0.0.1 ping statistics --- 00:10:12.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.260 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:12.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:12.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:10:12.260 00:10:12.260 --- 10.0.0.2 ping statistics --- 00:10:12.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.260 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=78948 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 78948 00:10:12.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 78948 ']' 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.260 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.260 [2024-12-16 14:26:04.333690] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:12.260 [2024-12-16 14:26:04.333992] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.519 [2024-12-16 14:26:04.481590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.519 [2024-12-16 14:26:04.500323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.519 [2024-12-16 14:26:04.500640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.519 [2024-12-16 14:26:04.500675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.519 [2024-12-16 14:26:04.500685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.519 [2024-12-16 14:26:04.500692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.519 [2024-12-16 14:26:04.501002] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.519 [2024-12-16 14:26:04.528570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:12.519 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.519 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:12.519 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:12.519 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:12.519 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.519 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.519 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:12.519 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:12.519 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.519 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.519 [2024-12-16 14:26:04.632684] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.519 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.519 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:12.519 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.520 [2024-12-16 14:26:04.648879] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.520 malloc0 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:12.520 { 00:10:12.520 "params": { 00:10:12.520 "name": "Nvme$subsystem", 00:10:12.520 "trtype": "$TEST_TRANSPORT", 00:10:12.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:12.520 "adrfam": "ipv4", 00:10:12.520 "trsvcid": "$NVMF_PORT", 00:10:12.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:12.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:12.520 "hdgst": ${hdgst:-false}, 00:10:12.520 "ddgst": ${ddgst:-false} 00:10:12.520 }, 00:10:12.520 "method": "bdev_nvme_attach_controller" 00:10:12.520 } 00:10:12.520 EOF 00:10:12.520 )") 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:12.520 14:26:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:12.520 "params": { 00:10:12.520 "name": "Nvme1", 00:10:12.520 "trtype": "tcp", 00:10:12.520 "traddr": "10.0.0.3", 00:10:12.520 "adrfam": "ipv4", 00:10:12.520 "trsvcid": "4420", 00:10:12.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:12.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:12.520 "hdgst": false, 00:10:12.520 "ddgst": false 00:10:12.520 }, 00:10:12.520 "method": "bdev_nvme_attach_controller" 00:10:12.520 }' 00:10:12.780 [2024-12-16 14:26:04.740010] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:10:12.780 [2024-12-16 14:26:04.740113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78968 ] 00:10:12.780 [2024-12-16 14:26:04.889685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.780 [2024-12-16 14:26:04.914294] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.780 [2024-12-16 14:26:04.955943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:13.046 Running I/O for 10 seconds... 00:10:14.924 6264.00 IOPS, 48.94 MiB/s [2024-12-16T14:26:08.062Z] 6308.00 IOPS, 49.28 MiB/s [2024-12-16T14:26:09.440Z] 6322.00 IOPS, 49.39 MiB/s [2024-12-16T14:26:10.378Z] 6332.00 IOPS, 49.47 MiB/s [2024-12-16T14:26:11.315Z] 6342.00 IOPS, 49.55 MiB/s [2024-12-16T14:26:12.251Z] 6359.50 IOPS, 49.68 MiB/s [2024-12-16T14:26:13.220Z] 6371.43 IOPS, 49.78 MiB/s [2024-12-16T14:26:14.156Z] 6374.75 IOPS, 49.80 MiB/s [2024-12-16T14:26:15.092Z] 6382.22 IOPS, 49.86 MiB/s [2024-12-16T14:26:15.092Z] 6377.30 IOPS, 49.82 MiB/s 00:10:22.892 Latency(us) 00:10:22.892 [2024-12-16T14:26:15.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.892 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:22.892 Verification LBA range: start 0x0 length 0x1000 00:10:22.892 Nvme1n1 : 10.01 6380.86 49.85 0.00 0.00 19996.06 2338.44 33840.41 00:10:22.892 [2024-12-16T14:26:15.092Z] =================================================================================================================== 00:10:22.892 [2024-12-16T14:26:15.092Z] Total : 6380.86 49.85 0.00 0.00 19996.06 2338.44 33840.41 00:10:23.156 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=79091 00:10:23.156 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:23.156 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.156 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:23.156 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:23.156 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:23.156 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:23.156 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:23.156 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:23.156 { 00:10:23.156 "params": { 00:10:23.156 "name": "Nvme$subsystem", 00:10:23.156 "trtype": "$TEST_TRANSPORT", 00:10:23.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:23.156 "adrfam": "ipv4", 00:10:23.156 "trsvcid": "$NVMF_PORT", 00:10:23.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:23.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:23.156 "hdgst": ${hdgst:-false}, 00:10:23.156 "ddgst": ${ddgst:-false} 00:10:23.156 }, 00:10:23.156 "method": "bdev_nvme_attach_controller" 00:10:23.156 } 00:10:23.156 EOF 00:10:23.156 )") 00:10:23.156 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:23.156 [2024-12-16 14:26:15.199663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.156 [2024-12-16 14:26:15.199710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.156 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:23.156 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:23.156 14:26:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:23.156 "params": { 00:10:23.156 "name": "Nvme1", 00:10:23.156 "trtype": "tcp", 00:10:23.156 "traddr": "10.0.0.3", 00:10:23.156 "adrfam": "ipv4", 00:10:23.156 "trsvcid": "4420", 00:10:23.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:23.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:23.156 "hdgst": false, 00:10:23.156 "ddgst": false 00:10:23.156 }, 00:10:23.156 "method": "bdev_nvme_attach_controller" 00:10:23.156 }' 00:10:23.156 [2024-12-16 14:26:15.211631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.156 [2024-12-16 14:26:15.211660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.156 [2024-12-16 14:26:15.223639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.156 [2024-12-16 14:26:15.223665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.156 [2024-12-16 14:26:15.235620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.156 [2024-12-16 14:26:15.235644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.156 [2024-12-16 14:26:15.247624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.156 [2024-12-16 14:26:15.247647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.156 [2024-12-16 14:26:15.252769] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
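For readers following the xtrace above: the target JSON handed to bdevperf is assembled by a shell helper that appends one bdev_nvme_attach_controller object per subsystem to an array via a here-document, then comma-joins the array and pretty-prints the result (the cat <<-EOF, jq ., IFS=, and printf steps visible in the trace). A minimal self-contained sketch of that pattern, with illustrative names and defaults rather than the exact nvmf/common.sh source:

gen_target_json_sketch() {
  # Sketch only: approximates the config+=("$(cat <<-EOF ... EOF)") pattern
  # traced above; the variable defaults are assumptions for standalone use.
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.3}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  # Comma-join the accumulated objects; with the default single subsystem this
  # yields the same kind of object shown in the printf '%s\n' output above.
  local IFS=,
  printf '%s\n' "${config[*]}"
}

Judging from the trace ordering (jq . runs before the IFS=, / printf expansion), the real helper embeds the joined objects in a larger JSON document that jq pretty-prints before it reaches bdevperf; this sketch only reproduces the per-controller fragment.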
00:10:23.156 [2024-12-16 14:26:15.253495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79091 ] 00:10:23.156 [2024-12-16 14:26:15.259644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.156 [2024-12-16 14:26:15.259838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.156 [2024-12-16 14:26:15.271634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.156 [2024-12-16 14:26:15.271814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.156 [2024-12-16 14:26:15.283635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.156 [2024-12-16 14:26:15.283663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.156 [2024-12-16 14:26:15.295633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.156 [2024-12-16 14:26:15.295658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.156 [2024-12-16 14:26:15.307637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.156 [2024-12-16 14:26:15.307662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.156 [2024-12-16 14:26:15.319654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.156 [2024-12-16 14:26:15.319825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.156 [2024-12-16 14:26:15.331648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.156 [2024-12-16 14:26:15.331675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.156 [2024-12-16 14:26:15.343647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.156 [2024-12-16 14:26:15.343672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.355685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.355710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.367676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.367701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.379697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.379739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.391685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.391710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.401723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.415 [2024-12-16 14:26:15.403690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.403721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.415723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.415760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.422951] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.415 [2024-12-16 14:26:15.427699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.427726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.439721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.439758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.451736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.451773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.461546] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:23.415 [2024-12-16 14:26:15.463724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.463767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.475735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.475774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.487710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.487736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.499761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.499813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.511745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.511776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.523746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.523792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.535806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.535856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.547762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.547824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.559780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.559998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 Running I/O for 5 seconds... 
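The perfpid run traced above (target/zcopy.sh@37/@39) wires the generated JSON into bdevperf through process substitution, which is why the command line shows --json /dev/fd/63. A hedged sketch of that wiring, with the repo path taken from the trace and flag meanings inferred from the surrounding output (-t run time in seconds, -q queue depth, -w workload, -M read/write mix, -o I/O size in bytes):

# Sketch only: mirrors the traced invocation; <(...) is what bdevperf later
# sees as /dev/fd/63, and backgrounding with $! capture is an assumption
# suggested by the perfpid= assignment in the trace.
rootdir=/home/vagrant/spdk_repo/spdk
"$rootdir/build/examples/bdevperf" \
  --json <(gen_nvmf_target_json) \
  -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!

Keeping the 5-second workload in the background like this would let the test issue the namespace add/remove RPCs whose "Requested NSID 1 already in use" errors are interleaved with the I/O samples in the log that follows.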
00:10:23.415 [2024-12-16 14:26:15.571798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.571845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.589059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.589094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.415 [2024-12-16 14:26:15.605670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.415 [2024-12-16 14:26:15.605869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.675 [2024-12-16 14:26:15.622034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.675 [2024-12-16 14:26:15.622067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.675 [2024-12-16 14:26:15.639253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.675 [2024-12-16 14:26:15.639303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.675 [2024-12-16 14:26:15.653494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.675 [2024-12-16 14:26:15.653715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.675 [2024-12-16 14:26:15.667721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.675 [2024-12-16 14:26:15.667756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.675 [2024-12-16 14:26:15.684295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.675 [2024-12-16 14:26:15.684330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.675 [2024-12-16 14:26:15.700636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.675 [2024-12-16 14:26:15.700668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.675 [2024-12-16 14:26:15.717305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.675 [2024-12-16 14:26:15.717339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.675 [2024-12-16 14:26:15.734340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.675 [2024-12-16 14:26:15.734375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.675 [2024-12-16 14:26:15.752419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.675 [2024-12-16 14:26:15.752639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.675 [2024-12-16 14:26:15.767881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.675 [2024-12-16 14:26:15.768047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.675 [2024-12-16 14:26:15.786041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.675 [2024-12-16 14:26:15.786075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.675 [2024-12-16 14:26:15.801678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.675 
[2024-12-16 14:26:15.801710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.675 [2024-12-16 14:26:15.817854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.675 [2024-12-16 14:26:15.818001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.675 [2024-12-16 14:26:15.833844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.675 [2024-12-16 14:26:15.833877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.675 [2024-12-16 14:26:15.848882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.675 [2024-12-16 14:26:15.849076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.675 [2024-12-16 14:26:15.859779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.675 [2024-12-16 14:26:15.859845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.934 [2024-12-16 14:26:15.875833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.934 [2024-12-16 14:26:15.875870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.934 [2024-12-16 14:26:15.890387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.934 [2024-12-16 14:26:15.890419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.934 [2024-12-16 14:26:15.906506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.934 [2024-12-16 14:26:15.906571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.934 [2024-12-16 14:26:15.925044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.934 [2024-12-16 14:26:15.925077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.934 [2024-12-16 14:26:15.939371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.934 [2024-12-16 14:26:15.939588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.934 [2024-12-16 14:26:15.956961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.934 [2024-12-16 14:26:15.956994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.934 [2024-12-16 14:26:15.971280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.934 [2024-12-16 14:26:15.971470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.934 [2024-12-16 14:26:15.989418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.934 [2024-12-16 14:26:15.989513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.934 [2024-12-16 14:26:16.005586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.934 [2024-12-16 14:26:16.005635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.934 [2024-12-16 14:26:16.019807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.934 [2024-12-16 14:26:16.019841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.934 [2024-12-16 14:26:16.036175] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.934 [2024-12-16 14:26:16.036208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.934 [2024-12-16 14:26:16.052383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.934 [2024-12-16 14:26:16.052417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.934 [2024-12-16 14:26:16.068963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.934 [2024-12-16 14:26:16.069176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.934 [2024-12-16 14:26:16.086083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.934 [2024-12-16 14:26:16.086116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.934 [2024-12-16 14:26:16.102143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.934 [2024-12-16 14:26:16.102175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.934 [2024-12-16 14:26:16.119755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.934 [2024-12-16 14:26:16.119952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.194 [2024-12-16 14:26:16.135359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.194 [2024-12-16 14:26:16.135560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.194 [2024-12-16 14:26:16.151240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.194 [2024-12-16 14:26:16.151276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.194 [2024-12-16 14:26:16.167734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.194 [2024-12-16 14:26:16.167827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.194 [2024-12-16 14:26:16.184133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.194 [2024-12-16 14:26:16.184184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.194 [2024-12-16 14:26:16.201176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.194 [2024-12-16 14:26:16.201255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.194 [2024-12-16 14:26:16.218265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.194 [2024-12-16 14:26:16.218492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.194 [2024-12-16 14:26:16.234418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.194 [2024-12-16 14:26:16.234494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.194 [2024-12-16 14:26:16.252834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.194 [2024-12-16 14:26:16.252883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.194 [2024-12-16 14:26:16.268354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.194 [2024-12-16 14:26:16.268601] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.194 [2024-12-16 14:26:16.285818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.194 [2024-12-16 14:26:16.285883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.194 [2024-12-16 14:26:16.302303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.194 [2024-12-16 14:26:16.302337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.194 [2024-12-16 14:26:16.320291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.194 [2024-12-16 14:26:16.320324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.194 [2024-12-16 14:26:16.335676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.194 [2024-12-16 14:26:16.335710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.194 [2024-12-16 14:26:16.346592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.194 [2024-12-16 14:26:16.346842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.194 [2024-12-16 14:26:16.361908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.194 [2024-12-16 14:26:16.362083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.194 [2024-12-16 14:26:16.377339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.194 [2024-12-16 14:26:16.377538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.194 [2024-12-16 14:26:16.387990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.194 [2024-12-16 14:26:16.388133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.454 [2024-12-16 14:26:16.404159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.454 [2024-12-16 14:26:16.404194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.454 [2024-12-16 14:26:16.418580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.454 [2024-12-16 14:26:16.418642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.454 [2024-12-16 14:26:16.433540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.454 [2024-12-16 14:26:16.433572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.454 [2024-12-16 14:26:16.449532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.454 [2024-12-16 14:26:16.449740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.454 [2024-12-16 14:26:16.466888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.454 [2024-12-16 14:26:16.467061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.454 [2024-12-16 14:26:16.483370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.454 [2024-12-16 14:26:16.483624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.454 [2024-12-16 14:26:16.500100] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.454 [2024-12-16 14:26:16.500314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.454 [2024-12-16 14:26:16.517195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.454 [2024-12-16 14:26:16.517398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.454 [2024-12-16 14:26:16.533703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.454 [2024-12-16 14:26:16.533876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.454 [2024-12-16 14:26:16.550620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.454 [2024-12-16 14:26:16.550814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.454 [2024-12-16 14:26:16.566466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.454 [2024-12-16 14:26:16.566665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.454 11007.00 IOPS, 85.99 MiB/s [2024-12-16T14:26:16.654Z] [2024-12-16 14:26:16.581968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.454 [2024-12-16 14:26:16.582150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.454 [2024-12-16 14:26:16.591769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.454 [2024-12-16 14:26:16.591947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.454 [2024-12-16 14:26:16.608108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.454 [2024-12-16 14:26:16.608283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.454 [2024-12-16 14:26:16.625737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.454 [2024-12-16 14:26:16.625960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.454 [2024-12-16 14:26:16.640637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.454 [2024-12-16 14:26:16.640802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.713 [2024-12-16 14:26:16.657244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.713 [2024-12-16 14:26:16.657411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.713 [2024-12-16 14:26:16.674426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.713 [2024-12-16 14:26:16.674654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.713 [2024-12-16 14:26:16.691247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.713 [2024-12-16 14:26:16.691420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.713 [2024-12-16 14:26:16.707480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.713 [2024-12-16 14:26:16.707673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.713 [2024-12-16 14:26:16.723835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:24.713 [2024-12-16 14:26:16.724047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.713 [2024-12-16 14:26:16.741591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.713 [2024-12-16 14:26:16.741772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.713 [2024-12-16 14:26:16.757166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.713 [2024-12-16 14:26:16.757358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.713 [2024-12-16 14:26:16.772941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.713 [2024-12-16 14:26:16.772973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.713 [2024-12-16 14:26:16.789306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.713 [2024-12-16 14:26:16.789338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.713 [2024-12-16 14:26:16.806170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.713 [2024-12-16 14:26:16.806203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.713 [2024-12-16 14:26:16.823263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.713 [2024-12-16 14:26:16.823493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.713 [2024-12-16 14:26:16.837851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.713 [2024-12-16 14:26:16.837998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.713 [2024-12-16 14:26:16.854757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.713 [2024-12-16 14:26:16.854795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.713 [2024-12-16 14:26:16.872109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.713 [2024-12-16 14:26:16.872141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.713 [2024-12-16 14:26:16.887734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.713 [2024-12-16 14:26:16.887956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.713 [2024-12-16 14:26:16.898023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.713 [2024-12-16 14:26:16.898052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.973 [2024-12-16 14:26:16.914014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.973 [2024-12-16 14:26:16.914064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.973 [2024-12-16 14:26:16.929109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.973 [2024-12-16 14:26:16.929144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.973 [2024-12-16 14:26:16.944625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.973 [2024-12-16 14:26:16.944658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.973 [2024-12-16 14:26:16.963231] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.973 [2024-12-16 14:26:16.963418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.973 [2024-12-16 14:26:16.978378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.973 [2024-12-16 14:26:16.978596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.973 [2024-12-16 14:26:16.995166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.973 [2024-12-16 14:26:16.995219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.973 [2024-12-16 14:26:17.012603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.973 [2024-12-16 14:26:17.012672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.973 [2024-12-16 14:26:17.027690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.973 [2024-12-16 14:26:17.027725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.973 [2024-12-16 14:26:17.043258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.973 [2024-12-16 14:26:17.043306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.973 [2024-12-16 14:26:17.059964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.973 [2024-12-16 14:26:17.059997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.973 [2024-12-16 14:26:17.076341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.973 [2024-12-16 14:26:17.076375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.973 [2024-12-16 14:26:17.092902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.973 [2024-12-16 14:26:17.092952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.973 [2024-12-16 14:26:17.109222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.973 [2024-12-16 14:26:17.109270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.973 [2024-12-16 14:26:17.126652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.973 [2024-12-16 14:26:17.126684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.973 [2024-12-16 14:26:17.141764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.973 [2024-12-16 14:26:17.141798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.973 [2024-12-16 14:26:17.151867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.973 [2024-12-16 14:26:17.152050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.973 [2024-12-16 14:26:17.167538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.973 [2024-12-16 14:26:17.167574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.232 [2024-12-16 14:26:17.182933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.232 [2024-12-16 14:26:17.182970] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.232 [2024-12-16 14:26:17.200067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.232 [2024-12-16 14:26:17.200104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.232 [2024-12-16 14:26:17.216311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.232 [2024-12-16 14:26:17.216344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.232 [2024-12-16 14:26:17.233217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.232 [2024-12-16 14:26:17.233482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.232 [2024-12-16 14:26:17.248696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.232 [2024-12-16 14:26:17.248729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.232 [2024-12-16 14:26:17.264758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.232 [2024-12-16 14:26:17.264791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.232 [2024-12-16 14:26:17.281851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.232 [2024-12-16 14:26:17.281884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.232 [2024-12-16 14:26:17.297224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.232 [2024-12-16 14:26:17.297270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.232 [2024-12-16 14:26:17.313125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.232 [2024-12-16 14:26:17.313159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.232 [2024-12-16 14:26:17.329436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.232 [2024-12-16 14:26:17.329497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.232 [2024-12-16 14:26:17.346824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.232 [2024-12-16 14:26:17.346861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.232 [2024-12-16 14:26:17.363515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.232 [2024-12-16 14:26:17.363575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.232 [2024-12-16 14:26:17.380774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.232 [2024-12-16 14:26:17.380823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.232 [2024-12-16 14:26:17.396603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.232 [2024-12-16 14:26:17.396636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.232 [2024-12-16 14:26:17.408085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.232 [2024-12-16 14:26:17.408118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.232 [2024-12-16 14:26:17.423903] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.232 [2024-12-16 14:26:17.423936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.492 [2024-12-16 14:26:17.439696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.492 [2024-12-16 14:26:17.439728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.492 [2024-12-16 14:26:17.457142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.492 [2024-12-16 14:26:17.457176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.492 [2024-12-16 14:26:17.472042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.492 [2024-12-16 14:26:17.472075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.492 [2024-12-16 14:26:17.481076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.492 [2024-12-16 14:26:17.481109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.492 [2024-12-16 14:26:17.496555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.492 [2024-12-16 14:26:17.496617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.492 [2024-12-16 14:26:17.512128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.492 [2024-12-16 14:26:17.512179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.492 [2024-12-16 14:26:17.530075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.492 [2024-12-16 14:26:17.530125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.492 [2024-12-16 14:26:17.546838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.492 [2024-12-16 14:26:17.546876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.492 [2024-12-16 14:26:17.562870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.492 [2024-12-16 14:26:17.562908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.492 11042.50 IOPS, 86.27 MiB/s [2024-12-16T14:26:17.692Z] [2024-12-16 14:26:17.580164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.492 [2024-12-16 14:26:17.580391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.492 [2024-12-16 14:26:17.596316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.492 [2024-12-16 14:26:17.596518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.492 [2024-12-16 14:26:17.612999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.492 [2024-12-16 14:26:17.613196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.492 [2024-12-16 14:26:17.630396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.492 [2024-12-16 14:26:17.630571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.492 [2024-12-16 14:26:17.646460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:25.492 [2024-12-16 14:26:17.646691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.492 [2024-12-16 14:26:17.657080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.492 [2024-12-16 14:26:17.657318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.492 [2024-12-16 14:26:17.672314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.492 [2024-12-16 14:26:17.672512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.492 [2024-12-16 14:26:17.689724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.492 [2024-12-16 14:26:17.689869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.751 [2024-12-16 14:26:17.705528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.751 [2024-12-16 14:26:17.705767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.751 [2024-12-16 14:26:17.722069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.751 [2024-12-16 14:26:17.722264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.751 [2024-12-16 14:26:17.739365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.751 [2024-12-16 14:26:17.739573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.751 [2024-12-16 14:26:17.754984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.751 [2024-12-16 14:26:17.755212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.751 [2024-12-16 14:26:17.772846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.751 [2024-12-16 14:26:17.773059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.751 [2024-12-16 14:26:17.787630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.751 [2024-12-16 14:26:17.787800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.751 [2024-12-16 14:26:17.804891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.751 [2024-12-16 14:26:17.805076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.751 [2024-12-16 14:26:17.820200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.751 [2024-12-16 14:26:17.820364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.751 [2024-12-16 14:26:17.829976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.751 [2024-12-16 14:26:17.830139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.751 [2024-12-16 14:26:17.845658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.751 [2024-12-16 14:26:17.845837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.751 [2024-12-16 14:26:17.868187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.751 [2024-12-16 14:26:17.868253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.751 [2024-12-16 14:26:17.883873] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.751 [2024-12-16 14:26:17.883939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.751 [2024-12-16 14:26:17.899127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.751 [2024-12-16 14:26:17.899352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.751 [2024-12-16 14:26:17.915936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.751 [2024-12-16 14:26:17.915983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.751 [2024-12-16 14:26:17.933235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.751 [2024-12-16 14:26:17.933281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.011 [2024-12-16 14:26:17.950162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.011 [2024-12-16 14:26:17.950196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.011 [2024-12-16 14:26:17.966359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.011 [2024-12-16 14:26:17.966391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.011 [2024-12-16 14:26:17.983243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.011 [2024-12-16 14:26:17.983277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.011 [2024-12-16 14:26:17.999729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.011 [2024-12-16 14:26:17.999763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.011 [2024-12-16 14:26:18.018372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.011 [2024-12-16 14:26:18.018406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.011 [2024-12-16 14:26:18.033624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.011 [2024-12-16 14:26:18.033672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.011 [2024-12-16 14:26:18.043648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.011 [2024-12-16 14:26:18.043681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.011 [2024-12-16 14:26:18.058915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.011 [2024-12-16 14:26:18.059068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.011 [2024-12-16 14:26:18.074990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.011 [2024-12-16 14:26:18.075028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.011 [2024-12-16 14:26:18.085212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.011 [2024-12-16 14:26:18.085245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.011 [2024-12-16 14:26:18.100841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.011 [2024-12-16 14:26:18.100874] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.011 [2024-12-16 14:26:18.116299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.011 [2024-12-16 14:26:18.116333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.011 [2024-12-16 14:26:18.126454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.011 [2024-12-16 14:26:18.126672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.011 [2024-12-16 14:26:18.143492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.011 [2024-12-16 14:26:18.143568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.011 [2024-12-16 14:26:18.159654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.011 [2024-12-16 14:26:18.159686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.011 [2024-12-16 14:26:18.175645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.011 [2024-12-16 14:26:18.175682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.011 [2024-12-16 14:26:18.194930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.011 [2024-12-16 14:26:18.194966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.270 [2024-12-16 14:26:18.210648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.270 [2024-12-16 14:26:18.210683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.270 [2024-12-16 14:26:18.226867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.270 [2024-12-16 14:26:18.226903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.270 [2024-12-16 14:26:18.244593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.270 [2024-12-16 14:26:18.244775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.270 [2024-12-16 14:26:18.260261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.270 [2024-12-16 14:26:18.260444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.270 [2024-12-16 14:26:18.270961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.270 [2024-12-16 14:26:18.271112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.270 [2024-12-16 14:26:18.285916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.270 [2024-12-16 14:26:18.286102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.270 [2024-12-16 14:26:18.301133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.270 [2024-12-16 14:26:18.301297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.270 [2024-12-16 14:26:18.318188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.270 [2024-12-16 14:26:18.318367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.270 [2024-12-16 14:26:18.332667] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.270 [2024-12-16 14:26:18.332860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.270 [2024-12-16 14:26:18.350050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.270 [2024-12-16 14:26:18.350248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.270 [2024-12-16 14:26:18.366778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.270 [2024-12-16 14:26:18.366926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.270 [2024-12-16 14:26:18.382196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.270 [2024-12-16 14:26:18.382363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.270 [2024-12-16 14:26:18.393525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.270 [2024-12-16 14:26:18.393705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.270 [2024-12-16 14:26:18.410141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.270 [2024-12-16 14:26:18.410334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.270 [2024-12-16 14:26:18.426551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.270 [2024-12-16 14:26:18.426731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.270 [2024-12-16 14:26:18.444637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.270 [2024-12-16 14:26:18.444814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.270 [2024-12-16 14:26:18.459035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.270 [2024-12-16 14:26:18.459240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.530 [2024-12-16 14:26:18.475263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.530 [2024-12-16 14:26:18.475481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.530 [2024-12-16 14:26:18.494276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.530 [2024-12-16 14:26:18.494498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.530 [2024-12-16 14:26:18.507756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.530 [2024-12-16 14:26:18.507964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.530 [2024-12-16 14:26:18.524081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.530 [2024-12-16 14:26:18.524257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.530 [2024-12-16 14:26:18.540319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.530 [2024-12-16 14:26:18.540527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.530 [2024-12-16 14:26:18.556392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.530 [2024-12-16 14:26:18.556424] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.530 [2024-12-16 14:26:18.565437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.530 [2024-12-16 14:26:18.565494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.530 11122.33 IOPS, 86.89 MiB/s [2024-12-16T14:26:18.730Z] [2024-12-16 14:26:18.581456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.530 [2024-12-16 14:26:18.581503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.530 [2024-12-16 14:26:18.596108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.530 [2024-12-16 14:26:18.596141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.530 [2024-12-16 14:26:18.612186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.530 [2024-12-16 14:26:18.612224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.530 [2024-12-16 14:26:18.628942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.530 [2024-12-16 14:26:18.628980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.530 [2024-12-16 14:26:18.645412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.530 [2024-12-16 14:26:18.645473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.530 [2024-12-16 14:26:18.662502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.530 [2024-12-16 14:26:18.662727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.530 [2024-12-16 14:26:18.677946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.530 [2024-12-16 14:26:18.678134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.530 [2024-12-16 14:26:18.694199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.530 [2024-12-16 14:26:18.694235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.530 [2024-12-16 14:26:18.710085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.530 [2024-12-16 14:26:18.710120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.729164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.729353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.744881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.744914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.753637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.753669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.769673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.769705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 
14:26:18.781207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.781239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.797949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.797982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.813866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.813915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.831859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.832040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.846584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.846616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.856075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.856109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.871906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.871956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.882384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.882555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.896944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.896983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.913128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.913163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.922427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.922639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.937862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.938058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.953975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.954008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.971092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.790 [2024-12-16 14:26:18.971125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.790 [2024-12-16 14:26:18.988168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.049 [2024-12-16 14:26:18.988341] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.049 [2024-12-16 14:26:19.004199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.049 [2024-12-16 14:26:19.004230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.050 [2024-12-16 14:26:19.021757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.050 [2024-12-16 14:26:19.021787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.050 [2024-12-16 14:26:19.036740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.050 [2024-12-16 14:26:19.036769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.050 [2024-12-16 14:26:19.053707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.050 [2024-12-16 14:26:19.053741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.050 [2024-12-16 14:26:19.068820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.050 [2024-12-16 14:26:19.068852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.050 [2024-12-16 14:26:19.084537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.050 [2024-12-16 14:26:19.084570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.050 [2024-12-16 14:26:19.100986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.050 [2024-12-16 14:26:19.101018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.050 [2024-12-16 14:26:19.118094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.050 [2024-12-16 14:26:19.118127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.050 [2024-12-16 14:26:19.134337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.050 [2024-12-16 14:26:19.134370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.050 [2024-12-16 14:26:19.152753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.050 [2024-12-16 14:26:19.152785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.050 [2024-12-16 14:26:19.169315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.050 [2024-12-16 14:26:19.169352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.050 [2024-12-16 14:26:19.184335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.050 [2024-12-16 14:26:19.184372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.050 [2024-12-16 14:26:19.200432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.050 [2024-12-16 14:26:19.200508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.050 [2024-12-16 14:26:19.218305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.050 [2024-12-16 14:26:19.218519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.050 [2024-12-16 14:26:19.233219] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.050 [2024-12-16 14:26:19.233266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.309 [2024-12-16 14:26:19.248992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.309 [2024-12-16 14:26:19.249028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.309 [2024-12-16 14:26:19.265409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.309 [2024-12-16 14:26:19.265485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.309 [2024-12-16 14:26:19.282570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.309 [2024-12-16 14:26:19.282604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.309 [2024-12-16 14:26:19.299344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.309 [2024-12-16 14:26:19.299377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.309 [2024-12-16 14:26:19.315252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.309 [2024-12-16 14:26:19.315301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.309 [2024-12-16 14:26:19.333208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.309 [2024-12-16 14:26:19.333240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.309 [2024-12-16 14:26:19.349201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.309 [2024-12-16 14:26:19.349235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.309 [2024-12-16 14:26:19.366892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.309 [2024-12-16 14:26:19.366925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.309 [2024-12-16 14:26:19.383463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.309 [2024-12-16 14:26:19.383525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.309 [2024-12-16 14:26:19.400337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.309 [2024-12-16 14:26:19.400372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.309 [2024-12-16 14:26:19.415891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.309 [2024-12-16 14:26:19.415928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.309 [2024-12-16 14:26:19.432240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.309 [2024-12-16 14:26:19.432477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.309 [2024-12-16 14:26:19.449637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.309 [2024-12-16 14:26:19.449670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.309 [2024-12-16 14:26:19.464137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.309 [2024-12-16 14:26:19.464169] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.309 [2024-12-16 14:26:19.480504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.309 [2024-12-16 14:26:19.480536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.309 [2024-12-16 14:26:19.498093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.309 [2024-12-16 14:26:19.498125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.567 [2024-12-16 14:26:19.512871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.567 [2024-12-16 14:26:19.512903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.567 [2024-12-16 14:26:19.529604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.567 [2024-12-16 14:26:19.529637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.567 [2024-12-16 14:26:19.544709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.568 [2024-12-16 14:26:19.544743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.568 [2024-12-16 14:26:19.560078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.568 [2024-12-16 14:26:19.560111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.568 11366.00 IOPS, 88.80 MiB/s [2024-12-16T14:26:19.768Z] [2024-12-16 14:26:19.577978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.568 [2024-12-16 14:26:19.578011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.568 [2024-12-16 14:26:19.593575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.568 [2024-12-16 14:26:19.593609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.568 [2024-12-16 14:26:19.610067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.568 [2024-12-16 14:26:19.610099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.568 [2024-12-16 14:26:19.628537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.568 [2024-12-16 14:26:19.628569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.568 [2024-12-16 14:26:19.644039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.568 [2024-12-16 14:26:19.644076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.568 [2024-12-16 14:26:19.660622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.568 [2024-12-16 14:26:19.660657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.568 [2024-12-16 14:26:19.677773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.568 [2024-12-16 14:26:19.677803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.568 [2024-12-16 14:26:19.694883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.568 [2024-12-16 14:26:19.695090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.568 [2024-12-16 
14:26:19.709891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.568 [2024-12-16 14:26:19.710094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.568 [2024-12-16 14:26:19.726018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.568 [2024-12-16 14:26:19.726205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.568 [2024-12-16 14:26:19.742974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.568 [2024-12-16 14:26:19.743156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.568 [2024-12-16 14:26:19.759467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.568 [2024-12-16 14:26:19.759684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.826 [2024-12-16 14:26:19.775371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.826 [2024-12-16 14:26:19.775612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.826 [2024-12-16 14:26:19.792538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.826 [2024-12-16 14:26:19.792763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.826 [2024-12-16 14:26:19.809090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.826 [2024-12-16 14:26:19.809268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.827 [2024-12-16 14:26:19.826056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.827 [2024-12-16 14:26:19.826220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.827 [2024-12-16 14:26:19.841894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.827 [2024-12-16 14:26:19.842072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.827 [2024-12-16 14:26:19.859436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.827 [2024-12-16 14:26:19.859645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.827 [2024-12-16 14:26:19.873849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.827 [2024-12-16 14:26:19.874043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.827 [2024-12-16 14:26:19.891755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.827 [2024-12-16 14:26:19.891901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.827 [2024-12-16 14:26:19.907682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.827 [2024-12-16 14:26:19.907842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.827 [2024-12-16 14:26:19.925747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.827 [2024-12-16 14:26:19.925941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.827 [2024-12-16 14:26:19.940862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.827 [2024-12-16 14:26:19.941076] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.827 [2024-12-16 14:26:19.956290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.827 [2024-12-16 14:26:19.956499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.827 [2024-12-16 14:26:19.967520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.827 [2024-12-16 14:26:19.967698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.827 [2024-12-16 14:26:19.982698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.827 [2024-12-16 14:26:19.982878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.827 [2024-12-16 14:26:19.999617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.827 [2024-12-16 14:26:19.999793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.827 [2024-12-16 14:26:20.014554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.827 [2024-12-16 14:26:20.014754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.086 [2024-12-16 14:26:20.025060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.086 [2024-12-16 14:26:20.025098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.086 [2024-12-16 14:26:20.040026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.086 [2024-12-16 14:26:20.040059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.086 [2024-12-16 14:26:20.057229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.086 [2024-12-16 14:26:20.057400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.086 [2024-12-16 14:26:20.072375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.086 [2024-12-16 14:26:20.072406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.086 [2024-12-16 14:26:20.089764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.086 [2024-12-16 14:26:20.089796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.086 [2024-12-16 14:26:20.104318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.086 [2024-12-16 14:26:20.104362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.087 [2024-12-16 14:26:20.119777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.087 [2024-12-16 14:26:20.119826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.087 [2024-12-16 14:26:20.128833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.087 [2024-12-16 14:26:20.129012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.087 [2024-12-16 14:26:20.143670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.087 [2024-12-16 14:26:20.143866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.087 [2024-12-16 14:26:20.158784] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.087 [2024-12-16 14:26:20.158951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.087 [2024-12-16 14:26:20.174484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.087 [2024-12-16 14:26:20.174517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.087 [2024-12-16 14:26:20.183718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.087 [2024-12-16 14:26:20.183755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.087 [2024-12-16 14:26:20.200952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.087 [2024-12-16 14:26:20.201122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.087 [2024-12-16 14:26:20.216465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.087 [2024-12-16 14:26:20.216678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.087 [2024-12-16 14:26:20.226134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.087 [2024-12-16 14:26:20.226167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.087 [2024-12-16 14:26:20.240991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.087 [2024-12-16 14:26:20.241023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.087 [2024-12-16 14:26:20.250666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.087 [2024-12-16 14:26:20.250700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.087 [2024-12-16 14:26:20.266912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.087 [2024-12-16 14:26:20.266948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.346 [2024-12-16 14:26:20.285781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.346 [2024-12-16 14:26:20.285818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.346 [2024-12-16 14:26:20.301119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.346 [2024-12-16 14:26:20.301151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.346 [2024-12-16 14:26:20.310654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.346 [2024-12-16 14:26:20.310686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.346 [2024-12-16 14:26:20.326106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.346 [2024-12-16 14:26:20.326138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.346 [2024-12-16 14:26:20.342243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.346 [2024-12-16 14:26:20.342275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.346 [2024-12-16 14:26:20.359121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.346 [2024-12-16 14:26:20.359154] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.346 [2024-12-16 14:26:20.375992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.346 [2024-12-16 14:26:20.376177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.346 [2024-12-16 14:26:20.392050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.346 [2024-12-16 14:26:20.392232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.346 [2024-12-16 14:26:20.408824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.346 [2024-12-16 14:26:20.409002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.346 [2024-12-16 14:26:20.426493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.346 [2024-12-16 14:26:20.426708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.346 [2024-12-16 14:26:20.442619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.346 [2024-12-16 14:26:20.442817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.346 [2024-12-16 14:26:20.459383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.346 [2024-12-16 14:26:20.459618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.346 [2024-12-16 14:26:20.475105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.346 [2024-12-16 14:26:20.475269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.346 [2024-12-16 14:26:20.493202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.346 [2024-12-16 14:26:20.493352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.346 [2024-12-16 14:26:20.507951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.346 [2024-12-16 14:26:20.508138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.346 [2024-12-16 14:26:20.524174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.346 [2024-12-16 14:26:20.524368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.346 [2024-12-16 14:26:20.540296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.346 [2024-12-16 14:26:20.540488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.605 [2024-12-16 14:26:20.556766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.605 [2024-12-16 14:26:20.556941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.605 11476.80 IOPS, 89.66 MiB/s [2024-12-16T14:26:20.805Z] [2024-12-16 14:26:20.572814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.605 [2024-12-16 14:26:20.573023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.605 00:10:28.605 Latency(us) 00:10:28.605 [2024-12-16T14:26:20.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.605 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:28.605 Nvme1n1 
: 5.01 11479.56 89.68 0.00 0.00 11136.29 4170.47 19184.17 00:10:28.605 [2024-12-16T14:26:20.805Z] =================================================================================================================== 00:10:28.605 [2024-12-16T14:26:20.805Z] Total : 11479.56 89.68 0.00 0.00 11136.29 4170.47 19184.17 00:10:28.605 [2024-12-16 14:26:20.584727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.605 [2024-12-16 14:26:20.584923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.605 [2024-12-16 14:26:20.596754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.605 [2024-12-16 14:26:20.597043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.605 [2024-12-16 14:26:20.608761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.605 [2024-12-16 14:26:20.609068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.605 [2024-12-16 14:26:20.620763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.605 [2024-12-16 14:26:20.620820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.605 [2024-12-16 14:26:20.632768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.605 [2024-12-16 14:26:20.632827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.605 [2024-12-16 14:26:20.644764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.605 [2024-12-16 14:26:20.645079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.605 [2024-12-16 14:26:20.656762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.605 [2024-12-16 14:26:20.656804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.605 [2024-12-16 14:26:20.668771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.605 [2024-12-16 14:26:20.668829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.605 [2024-12-16 14:26:20.680778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.605 [2024-12-16 14:26:20.680849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.605 [2024-12-16 14:26:20.692760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.605 [2024-12-16 14:26:20.692787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.605 [2024-12-16 14:26:20.704755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.605 [2024-12-16 14:26:20.704783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.605 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (79091) - No such process 00:10:28.605 14:26:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 79091 00:10:28.605 14:26:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.605 14:26:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.605 14:26:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set 
+x 00:10:28.605 14:26:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.605 14:26:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:28.605 14:26:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.605 14:26:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:28.605 delay0 00:10:28.605 14:26:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.605 14:26:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:28.605 14:26:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.606 14:26:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:28.606 14:26:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.606 14:26:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:28.864 [2024-12-16 14:26:20.916352] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:35.461 Initializing NVMe Controllers 00:10:35.461 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:35.461 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:35.461 Initialization complete. Launching workers. 
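
Editor's note: the summary table above closes the bandwidth phase of the zcopy test (the MiB/s column is just IOPS × the 8192-byte I/O size, e.g. 11479.56 × 8192 B ≈ 89.68 MiB/s), and the rpc_cmd calls that follow swap the namespace for a deliberately slow delay bdev before the abort example fires at it. The sequence can be approximated by hand with SPDK's scripts/rpc.py; this is only a sketch of what the harness drives through rpc_cmd, and it assumes a target already listening on 10.0.0.3:4420 with subsystem nqn.2016-06.io.spdk:cnode1 and a malloc0 bdev, exactly as this script set up earlier:

    # remove the original namespace, then re-export malloc0 behind a delay bdev
    # (latencies are in microseconds: 1000000 us = 1 s average and p99, reads and writes)
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # queue 64-deep random read/write I/O at the slowed namespace for 5 s and abort it,
    # matching the build/examples/abort invocation in the log above
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The artificial 1 s latency exists to keep commands in flight long enough for the abort example to have something to cancel; the submitted/success/unsuccessful counts in the log above are the result of that race.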
00:10:35.461 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 116 00:10:35.461 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 403, failed to submit 33 00:10:35.461 success 290, unsuccessful 113, failed 0 00:10:35.461 14:26:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:35.461 14:26:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:35.461 14:26:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:35.461 14:26:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.461 rmmod nvme_tcp 00:10:35.461 rmmod nvme_fabrics 00:10:35.461 rmmod nvme_keyring 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 78948 ']' 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 78948 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 78948 ']' 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 78948 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78948 00:10:35.461 killing process with pid 78948 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78948' 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 78948 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 78948 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:35.461 14:26:27 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:35.461 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:35.462 00:10:35.462 real 0m23.877s 00:10:35.462 user 0m39.084s 00:10:35.462 sys 0m6.664s 00:10:35.462 ************************************ 00:10:35.462 END TEST nvmf_zcopy 00:10:35.462 ************************************ 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:35.462 ************************************ 00:10:35.462 START TEST nvmf_nmic 00:10:35.462 ************************************ 00:10:35.462 14:26:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:35.462 * Looking for test storage... 00:10:35.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:35.462 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:35.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.762 --rc genhtml_branch_coverage=1 00:10:35.762 --rc genhtml_function_coverage=1 00:10:35.762 --rc genhtml_legend=1 00:10:35.762 --rc geninfo_all_blocks=1 00:10:35.762 --rc geninfo_unexecuted_blocks=1 00:10:35.762 00:10:35.762 ' 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:35.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.762 --rc genhtml_branch_coverage=1 00:10:35.762 --rc genhtml_function_coverage=1 00:10:35.762 --rc genhtml_legend=1 00:10:35.762 --rc geninfo_all_blocks=1 00:10:35.762 --rc geninfo_unexecuted_blocks=1 00:10:35.762 00:10:35.762 ' 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:35.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.762 --rc genhtml_branch_coverage=1 00:10:35.762 --rc genhtml_function_coverage=1 00:10:35.762 --rc genhtml_legend=1 00:10:35.762 --rc geninfo_all_blocks=1 00:10:35.762 --rc geninfo_unexecuted_blocks=1 00:10:35.762 00:10:35.762 ' 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:35.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.762 --rc genhtml_branch_coverage=1 00:10:35.762 --rc genhtml_function_coverage=1 00:10:35.762 --rc genhtml_legend=1 00:10:35.762 --rc geninfo_all_blocks=1 00:10:35.762 --rc geninfo_unexecuted_blocks=1 00:10:35.762 00:10:35.762 ' 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:35.762 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.763 14:26:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:35.763 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:35.763 14:26:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:35.763 Cannot 
find device "nvmf_init_br" 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:35.763 Cannot find device "nvmf_init_br2" 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:35.763 Cannot find device "nvmf_tgt_br" 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:35.763 Cannot find device "nvmf_tgt_br2" 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:35.763 Cannot find device "nvmf_init_br" 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:35.763 Cannot find device "nvmf_init_br2" 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:35.763 Cannot find device "nvmf_tgt_br" 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:35.763 Cannot find device "nvmf_tgt_br2" 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:35.763 Cannot find device "nvmf_br" 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:35.763 Cannot find device "nvmf_init_if" 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:35.763 Cannot find device "nvmf_init_if2" 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.763 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.763 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.763 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:35.764 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:35.764 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:35.764 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:35.764 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:35.764 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:35.764 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:35.764 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:36.023 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:36.023 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:36.023 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:36.023 14:26:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:36.023 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:36.023 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:10:36.023 00:10:36.023 --- 10.0.0.3 ping statistics --- 00:10:36.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.023 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:36.023 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:36.023 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:10:36.023 00:10:36.023 --- 10.0.0.4 ping statistics --- 00:10:36.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.023 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:36.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:36.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:10:36.023 00:10:36.023 --- 10.0.0.1 ping statistics --- 00:10:36.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.023 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:36.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:36.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:10:36.023 00:10:36.023 --- 10.0.0.2 ping statistics --- 00:10:36.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.023 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=79461 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 79461 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 79461 ']' 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.023 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.282 [2024-12-16 14:26:28.233350] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:36.282 [2024-12-16 14:26:28.233647] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.282 [2024-12-16 14:26:28.383832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.282 [2024-12-16 14:26:28.409566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.282 [2024-12-16 14:26:28.409623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.282 [2024-12-16 14:26:28.409636] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.282 [2024-12-16 14:26:28.409647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.282 [2024-12-16 14:26:28.409656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.282 [2024-12-16 14:26:28.410532] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.282 [2024-12-16 14:26:28.411181] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.282 [2024-12-16 14:26:28.411551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.282 [2024-12-16 14:26:28.411579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.282 [2024-12-16 14:26:28.466458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.541 [2024-12-16 14:26:28.562447] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.541 Malloc0 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:36.541 14:26:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.541 [2024-12-16 14:26:28.621655] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:36.541 test case1: single bdev can't be used in multiple subsystems 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.541 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.541 [2024-12-16 14:26:28.649488] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:36.541 [2024-12-16 14:26:28.649711] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:36.541 [2024-12-16 14:26:28.649977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.541 request: 00:10:36.541 { 00:10:36.541 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:36.541 "namespace": { 00:10:36.541 "bdev_name": "Malloc0", 00:10:36.541 "no_auto_visible": false, 00:10:36.541 "hide_metadata": false 00:10:36.541 }, 00:10:36.541 "method": "nvmf_subsystem_add_ns", 00:10:36.541 "req_id": 1 00:10:36.541 } 00:10:36.541 Got JSON-RPC error response 00:10:36.541 response: 00:10:36.541 { 00:10:36.541 "code": -32602, 00:10:36.541 "message": "Invalid parameters" 00:10:36.541 } 00:10:36.542 Adding namespace failed - expected result. 00:10:36.542 test case2: host connect to nvmf target in multiple paths 00:10:36.542 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:36.542 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:36.542 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:36.542 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:36.542 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:36.542 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:36.542 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.542 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.542 [2024-12-16 14:26:28.661665] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:36.542 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.542 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:36.800 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:36.800 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:36.800 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:36.800 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:36.800 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:36.800 14:26:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:39.328 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:39.328 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:39.328 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:39.328 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:39.328 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:10:39.328 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:39.328 14:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:39.328 [global] 00:10:39.328 thread=1 00:10:39.328 invalidate=1 00:10:39.328 rw=write 00:10:39.329 time_based=1 00:10:39.329 runtime=1 00:10:39.329 ioengine=libaio 00:10:39.329 direct=1 00:10:39.329 bs=4096 00:10:39.329 iodepth=1 00:10:39.329 norandommap=0 00:10:39.329 numjobs=1 00:10:39.329 00:10:39.329 verify_dump=1 00:10:39.329 verify_backlog=512 00:10:39.329 verify_state_save=0 00:10:39.329 do_verify=1 00:10:39.329 verify=crc32c-intel 00:10:39.329 [job0] 00:10:39.329 filename=/dev/nvme0n1 00:10:39.329 Could not set queue depth (nvme0n1) 00:10:39.329 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.329 fio-3.35 00:10:39.329 Starting 1 thread 00:10:40.262 00:10:40.262 job0: (groupid=0, jobs=1): err= 0: pid=79545: Mon Dec 16 14:26:32 2024 00:10:40.262 read: IOPS=2969, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1000msec) 00:10:40.262 slat (nsec): min=11699, max=40612, avg=13175.64, stdev=2296.52 00:10:40.262 clat (usec): min=144, max=521, avg=182.12, stdev=23.35 00:10:40.262 lat (usec): min=156, max=535, avg=195.29, stdev=23.58 00:10:40.262 clat percentiles (usec): 00:10:40.262 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:10:40.262 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:10:40.262 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 208], 00:10:40.262 | 99.00th=[ 225], 99.50th=[ 363], 99.90th=[ 478], 99.95th=[ 519], 00:10:40.262 | 99.99th=[ 523] 00:10:40.262 write: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec); 0 zone resets 00:10:40.262 slat (usec): min=17, max=106, avg=20.90, stdev= 6.16 00:10:40.262 clat (usec): min=87, max=567, avg=112.80, stdev=18.78 00:10:40.262 lat (usec): min=106, max=600, avg=133.70, stdev=21.50 00:10:40.262 clat percentiles (usec): 00:10:40.262 | 1.00th=[ 92], 5.00th=[ 94], 10.00th=[ 97], 20.00th=[ 102], 00:10:40.262 | 30.00th=[ 106], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 115], 00:10:40.262 | 70.00th=[ 118], 80.00th=[ 122], 90.00th=[ 129], 95.00th=[ 137], 00:10:40.262 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 251], 99.95th=[ 553], 00:10:40.262 | 99.99th=[ 570] 00:10:40.262 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:40.262 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:40.262 lat (usec) : 100=8.18%, 250=91.43%, 500=0.31%, 750=0.08% 00:10:40.262 cpu : usr=2.80%, sys=7.50%, ctx=6041, majf=0, minf=5 00:10:40.262 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.262 issued rwts: total=2969,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.262 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.262 00:10:40.262 Run status group 0 (all jobs): 00:10:40.262 READ: bw=11.6MiB/s (12.2MB/s), 11.6MiB/s-11.6MiB/s (12.2MB/s-12.2MB/s), io=11.6MiB (12.2MB), run=1000-1000msec 00:10:40.262 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1000-1000msec 00:10:40.262 00:10:40.262 Disk stats (read/write): 00:10:40.262 nvme0n1: ios=2609/2878, merge=0/0, 
ticks=494/346, in_queue=840, util=91.16% 00:10:40.262 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:40.262 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.262 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:40.262 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:40.262 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.262 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:40.262 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.262 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:40.262 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:40.262 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:40.262 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:40.262 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:40.520 rmmod nvme_tcp 00:10:40.520 rmmod nvme_fabrics 00:10:40.520 rmmod nvme_keyring 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 79461 ']' 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 79461 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 79461 ']' 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 79461 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79461 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:40.520 killing process with pid 79461 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79461' 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@973 -- # kill 79461 00:10:40.520 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 79461 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:40.779 00:10:40.779 real 0m5.408s 00:10:40.779 user 0m16.002s 00:10:40.779 sys 0m2.276s 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.779 14:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.779 
************************************ 00:10:40.779 END TEST nvmf_nmic 00:10:40.779 ************************************ 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:41.039 ************************************ 00:10:41.039 START TEST nvmf_fio_target 00:10:41.039 ************************************ 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:41.039 * Looking for test storage... 00:10:41.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:41.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.039 --rc genhtml_branch_coverage=1 00:10:41.039 --rc genhtml_function_coverage=1 00:10:41.039 --rc genhtml_legend=1 00:10:41.039 --rc geninfo_all_blocks=1 00:10:41.039 --rc geninfo_unexecuted_blocks=1 00:10:41.039 00:10:41.039 ' 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:41.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.039 --rc genhtml_branch_coverage=1 00:10:41.039 --rc genhtml_function_coverage=1 00:10:41.039 --rc genhtml_legend=1 00:10:41.039 --rc geninfo_all_blocks=1 00:10:41.039 --rc geninfo_unexecuted_blocks=1 00:10:41.039 00:10:41.039 ' 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:41.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.039 --rc genhtml_branch_coverage=1 00:10:41.039 --rc genhtml_function_coverage=1 00:10:41.039 --rc genhtml_legend=1 00:10:41.039 --rc geninfo_all_blocks=1 00:10:41.039 --rc geninfo_unexecuted_blocks=1 00:10:41.039 00:10:41.039 ' 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:41.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.039 --rc genhtml_branch_coverage=1 00:10:41.039 --rc genhtml_function_coverage=1 00:10:41.039 --rc genhtml_legend=1 00:10:41.039 --rc geninfo_all_blocks=1 00:10:41.039 --rc geninfo_unexecuted_blocks=1 00:10:41.039 00:10:41.039 ' 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:41.039 
14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.039 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.040 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:41.040 14:26:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:41.040 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:41.298 Cannot find device "nvmf_init_br" 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:41.298 Cannot find device "nvmf_init_br2" 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:41.298 Cannot find device "nvmf_tgt_br" 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:41.298 Cannot find device "nvmf_tgt_br2" 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:41.298 Cannot find device "nvmf_init_br" 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:41.298 Cannot find device "nvmf_init_br2" 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:41.298 Cannot find device "nvmf_tgt_br" 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:41.298 Cannot find device "nvmf_tgt_br2" 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:41.298 Cannot find device "nvmf_br" 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:41.298 Cannot find device "nvmf_init_if" 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:41.298 Cannot find device "nvmf_init_if2" 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:41.298 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:41.298 
14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:41.298 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:41.298 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:41.557 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:41.557 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:10:41.557 00:10:41.557 --- 10.0.0.3 ping statistics --- 00:10:41.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.557 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:41.557 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:41.557 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:10:41.557 00:10:41.557 --- 10.0.0.4 ping statistics --- 00:10:41.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.557 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:41.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:41.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:10:41.557 00:10:41.557 --- 10.0.0.1 ping statistics --- 00:10:41.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.557 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:41.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:41.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:10:41.557 00:10:41.557 --- 10.0.0.2 ping statistics --- 00:10:41.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.557 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:41.557 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:41.558 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:41.558 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:41.558 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:41.558 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.558 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=79779 00:10:41.558 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:41.558 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 79779 00:10:41.558 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 79779 ']' 00:10:41.558 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.558 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.558 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.558 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.558 14:26:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.558 [2024-12-16 14:26:33.723089] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:41.558 [2024-12-16 14:26:33.723188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.816 [2024-12-16 14:26:33.889486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.816 [2024-12-16 14:26:33.915970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.816 [2024-12-16 14:26:33.916039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.816 [2024-12-16 14:26:33.916057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.816 [2024-12-16 14:26:33.916069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.816 [2024-12-16 14:26:33.916081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.816 [2024-12-16 14:26:33.917236] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.816 [2024-12-16 14:26:33.917293] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.816 [2024-12-16 14:26:33.917427] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.816 [2024-12-16 14:26:33.917462] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.816 [2024-12-16 14:26:33.953606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:41.816 14:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.816 14:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:41.816 14:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:41.816 14:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:41.816 14:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.074 14:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.074 14:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:42.333 [2024-12-16 14:26:34.275784] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.333 14:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.591 14:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:42.591 14:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.849 14:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:42.849 14:26:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.106 14:26:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:43.106 14:26:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.364 14:26:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:43.364 14:26:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:43.623 14:26:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.881 14:26:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:43.881 14:26:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:44.139 14:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:44.139 14:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:44.397 14:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:44.397 14:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:44.655 14:26:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:44.914 14:26:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:44.914 14:26:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:45.172 14:26:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:45.172 14:26:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:45.431 14:26:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:45.691 [2024-12-16 14:26:37.818995] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:45.691 14:26:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:45.976 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:46.249 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:46.506 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:46.506 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:46.506 14:26:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.506 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:46.506 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:46.506 14:26:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:48.403 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:48.403 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:48.403 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:48.403 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:48.403 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.403 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:48.403 14:26:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:48.403 [global] 00:10:48.403 thread=1 00:10:48.403 invalidate=1 00:10:48.403 rw=write 00:10:48.403 time_based=1 00:10:48.403 runtime=1 00:10:48.403 ioengine=libaio 00:10:48.403 direct=1 00:10:48.403 bs=4096 00:10:48.403 iodepth=1 00:10:48.403 norandommap=0 00:10:48.403 numjobs=1 00:10:48.403 00:10:48.403 verify_dump=1 00:10:48.403 verify_backlog=512 00:10:48.403 verify_state_save=0 00:10:48.403 do_verify=1 00:10:48.403 verify=crc32c-intel 00:10:48.403 [job0] 00:10:48.403 filename=/dev/nvme0n1 00:10:48.661 [job1] 00:10:48.661 filename=/dev/nvme0n2 00:10:48.661 [job2] 00:10:48.661 filename=/dev/nvme0n3 00:10:48.661 [job3] 00:10:48.661 filename=/dev/nvme0n4 00:10:48.661 Could not set queue depth (nvme0n1) 00:10:48.661 Could not set queue depth (nvme0n2) 00:10:48.661 Could not set queue depth (nvme0n3) 00:10:48.661 Could not set queue depth (nvme0n4) 00:10:48.661 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.661 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.661 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.661 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.661 fio-3.35 00:10:48.661 Starting 4 threads 00:10:50.033 00:10:50.033 job0: (groupid=0, jobs=1): err= 0: pid=79961: Mon Dec 16 14:26:41 2024 00:10:50.033 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:50.033 slat (nsec): min=10911, max=34670, avg=12760.40, stdev=1683.74 00:10:50.033 clat (usec): min=133, max=1779, avg=160.35, stdev=31.19 00:10:50.033 lat (usec): min=145, max=1791, avg=173.11, stdev=31.33 00:10:50.033 clat percentiles (usec): 00:10:50.033 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:10:50.033 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:10:50.033 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 180], 00:10:50.033 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 206], 99.95th=[ 212], 00:10:50.033 | 99.99th=[ 1778] 
00:10:50.033 write: IOPS=3260, BW=12.7MiB/s (13.4MB/s)(12.8MiB/1001msec); 0 zone resets 00:10:50.033 slat (nsec): min=13443, max=97601, avg=19250.31, stdev=3586.33 00:10:50.033 clat (usec): min=91, max=265, avg=121.24, stdev=11.83 00:10:50.033 lat (usec): min=109, max=362, avg=140.49, stdev=12.88 00:10:50.033 clat percentiles (usec): 00:10:50.033 | 1.00th=[ 98], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 112], 00:10:50.033 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 122], 60.00th=[ 125], 00:10:50.033 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 137], 95.00th=[ 141], 00:10:50.033 | 99.00th=[ 151], 99.50th=[ 153], 99.90th=[ 161], 99.95th=[ 169], 00:10:50.033 | 99.99th=[ 265] 00:10:50.033 bw ( KiB/s): min=12648, max=12648, per=30.08%, avg=12648.00, stdev= 0.00, samples=1 00:10:50.033 iops : min= 3162, max= 3162, avg=3162.00, stdev= 0.00, samples=1 00:10:50.033 lat (usec) : 100=1.07%, 250=98.90%, 500=0.02% 00:10:50.033 lat (msec) : 2=0.02% 00:10:50.033 cpu : usr=2.40%, sys=7.90%, ctx=6337, majf=0, minf=13 00:10:50.033 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.033 issued rwts: total=3072,3264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.033 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.033 job1: (groupid=0, jobs=1): err= 0: pid=79962: Mon Dec 16 14:26:41 2024 00:10:50.033 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:50.033 slat (nsec): min=11193, max=55629, avg=14786.05, stdev=5185.12 00:10:50.033 clat (usec): min=136, max=214, avg=159.87, stdev=10.95 00:10:50.033 lat (usec): min=148, max=247, avg=174.65, stdev=13.78 00:10:50.033 clat percentiles (usec): 00:10:50.033 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:10:50.033 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:10:50.033 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 180], 00:10:50.033 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 206], 99.95th=[ 210], 00:10:50.033 | 99.99th=[ 215] 00:10:50.033 write: IOPS=3159, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1001msec); 0 zone resets 00:10:50.033 slat (usec): min=14, max=109, avg=21.42, stdev= 6.72 00:10:50.033 clat (usec): min=92, max=285, avg=121.87, stdev=12.76 00:10:50.033 lat (usec): min=110, max=356, avg=143.28, stdev=15.81 00:10:50.033 clat percentiles (usec): 00:10:50.033 | 1.00th=[ 98], 5.00th=[ 103], 10.00th=[ 108], 20.00th=[ 112], 00:10:50.033 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 122], 60.00th=[ 125], 00:10:50.033 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 143], 00:10:50.033 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 221], 99.95th=[ 247], 00:10:50.033 | 99.99th=[ 285] 00:10:50.033 bw ( KiB/s): min=12288, max=12288, per=29.22%, avg=12288.00, stdev= 0.00, samples=1 00:10:50.033 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:50.033 lat (usec) : 100=1.04%, 250=98.94%, 500=0.02% 00:10:50.033 cpu : usr=2.10%, sys=9.30%, ctx=6236, majf=0, minf=5 00:10:50.033 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.033 issued rwts: total=3072,3163,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.033 latency : target=0, window=0, percentile=100.00%, depth=1 
00:10:50.033 job2: (groupid=0, jobs=1): err= 0: pid=79963: Mon Dec 16 14:26:41 2024 00:10:50.033 read: IOPS=1556, BW=6226KiB/s (6375kB/s)(6232KiB/1001msec) 00:10:50.033 slat (nsec): min=13942, max=45729, avg=17983.31, stdev=3931.65 00:10:50.033 clat (usec): min=161, max=2486, avg=299.02, stdev=83.69 00:10:50.033 lat (usec): min=176, max=2517, avg=317.00, stdev=85.25 00:10:50.033 clat percentiles (usec): 00:10:50.033 | 1.00th=[ 184], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 277], 00:10:50.033 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:10:50.033 | 70.00th=[ 302], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 334], 00:10:50.033 | 99.00th=[ 553], 99.50th=[ 594], 99.90th=[ 1254], 99.95th=[ 2474], 00:10:50.033 | 99.99th=[ 2474] 00:10:50.033 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:50.033 slat (usec): min=18, max=109, avg=25.20, stdev= 4.40 00:10:50.033 clat (usec): min=118, max=859, avg=218.34, stdev=27.85 00:10:50.033 lat (usec): min=140, max=884, avg=243.55, stdev=29.14 00:10:50.033 clat percentiles (usec): 00:10:50.033 | 1.00th=[ 133], 5.00th=[ 188], 10.00th=[ 200], 20.00th=[ 206], 00:10:50.033 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:10:50.033 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 243], 95.00th=[ 251], 00:10:50.033 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 400], 99.95th=[ 429], 00:10:50.033 | 99.99th=[ 857] 00:10:50.033 bw ( KiB/s): min= 8192, max= 8192, per=19.48%, avg=8192.00, stdev= 0.00, samples=1 00:10:50.033 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:50.033 lat (usec) : 250=55.05%, 500=43.37%, 750=1.44%, 1000=0.08% 00:10:50.033 lat (msec) : 2=0.03%, 4=0.03% 00:10:50.033 cpu : usr=1.60%, sys=6.40%, ctx=3607, majf=0, minf=11 00:10:50.033 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.033 issued rwts: total=1558,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.033 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.033 job3: (groupid=0, jobs=1): err= 0: pid=79964: Mon Dec 16 14:26:41 2024 00:10:50.033 read: IOPS=1537, BW=6150KiB/s (6297kB/s)(6156KiB/1001msec) 00:10:50.033 slat (nsec): min=11669, max=39909, avg=15174.82, stdev=2967.07 00:10:50.033 clat (usec): min=150, max=3865, avg=306.01, stdev=124.82 00:10:50.033 lat (usec): min=165, max=3885, avg=321.19, stdev=125.50 00:10:50.033 clat percentiles (usec): 00:10:50.033 | 1.00th=[ 192], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 281], 00:10:50.033 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 297], 00:10:50.033 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 318], 95.00th=[ 371], 00:10:50.033 | 99.00th=[ 603], 99.50th=[ 881], 99.90th=[ 1795], 99.95th=[ 3851], 00:10:50.033 | 99.99th=[ 3851] 00:10:50.033 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:50.033 slat (usec): min=17, max=109, avg=22.60, stdev= 5.46 00:10:50.033 clat (usec): min=123, max=893, avg=221.49, stdev=31.36 00:10:50.033 lat (usec): min=142, max=913, avg=244.09, stdev=33.59 00:10:50.033 clat percentiles (usec): 00:10:50.033 | 1.00th=[ 141], 5.00th=[ 182], 10.00th=[ 202], 20.00th=[ 210], 00:10:50.033 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:10:50.033 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 243], 95.00th=[ 251], 00:10:50.033 | 99.00th=[ 359], 99.50th=[ 375], 
99.90th=[ 400], 99.95th=[ 400], 00:10:50.033 | 99.99th=[ 898] 00:10:50.033 bw ( KiB/s): min= 8192, max= 8192, per=19.48%, avg=8192.00, stdev= 0.00, samples=1 00:10:50.033 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:50.033 lat (usec) : 250=54.45%, 500=44.86%, 750=0.42%, 1000=0.08% 00:10:50.033 lat (msec) : 2=0.17%, 4=0.03% 00:10:50.033 cpu : usr=1.60%, sys=5.30%, ctx=3593, majf=0, minf=8 00:10:50.033 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.033 issued rwts: total=1539,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.033 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.033 00:10:50.033 Run status group 0 (all jobs): 00:10:50.033 READ: bw=36.1MiB/s (37.8MB/s), 6150KiB/s-12.0MiB/s (6297kB/s-12.6MB/s), io=36.1MiB (37.9MB), run=1001-1001msec 00:10:50.033 WRITE: bw=41.1MiB/s (43.1MB/s), 8184KiB/s-12.7MiB/s (8380kB/s-13.4MB/s), io=41.1MiB (43.1MB), run=1001-1001msec 00:10:50.033 00:10:50.033 Disk stats (read/write): 00:10:50.033 nvme0n1: ios=2610/2929, merge=0/0, ticks=444/380, in_queue=824, util=88.48% 00:10:50.033 nvme0n2: ios=2597/2832, merge=0/0, ticks=446/362, in_queue=808, util=88.84% 00:10:50.033 nvme0n3: ios=1536/1536, merge=0/0, ticks=467/349, in_queue=816, util=89.18% 00:10:50.033 nvme0n4: ios=1511/1536, merge=0/0, ticks=458/358, in_queue=816, util=89.74% 00:10:50.033 14:26:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:50.033 [global] 00:10:50.033 thread=1 00:10:50.033 invalidate=1 00:10:50.033 rw=randwrite 00:10:50.033 time_based=1 00:10:50.033 runtime=1 00:10:50.033 ioengine=libaio 00:10:50.033 direct=1 00:10:50.033 bs=4096 00:10:50.033 iodepth=1 00:10:50.033 norandommap=0 00:10:50.033 numjobs=1 00:10:50.033 00:10:50.033 verify_dump=1 00:10:50.033 verify_backlog=512 00:10:50.033 verify_state_save=0 00:10:50.033 do_verify=1 00:10:50.033 verify=crc32c-intel 00:10:50.033 [job0] 00:10:50.033 filename=/dev/nvme0n1 00:10:50.033 [job1] 00:10:50.033 filename=/dev/nvme0n2 00:10:50.033 [job2] 00:10:50.033 filename=/dev/nvme0n3 00:10:50.033 [job3] 00:10:50.033 filename=/dev/nvme0n4 00:10:50.033 Could not set queue depth (nvme0n1) 00:10:50.033 Could not set queue depth (nvme0n2) 00:10:50.033 Could not set queue depth (nvme0n3) 00:10:50.033 Could not set queue depth (nvme0n4) 00:10:50.033 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.033 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.033 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.033 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.033 fio-3.35 00:10:50.033 Starting 4 threads 00:10:51.405 00:10:51.405 job0: (groupid=0, jobs=1): err= 0: pid=80017: Mon Dec 16 14:26:43 2024 00:10:51.405 read: IOPS=2858, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec) 00:10:51.405 slat (nsec): min=10687, max=37533, avg=12644.60, stdev=1711.05 00:10:51.405 clat (usec): min=138, max=535, avg=169.79, stdev=23.71 00:10:51.405 lat (usec): min=150, max=552, avg=182.43, stdev=24.39 00:10:51.405 clat percentiles (usec): 00:10:51.405 
| 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:10:51.405 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:10:51.405 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 200], 95.00th=[ 217], 00:10:51.405 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 396], 99.95th=[ 461], 00:10:51.405 | 99.99th=[ 537] 00:10:51.405 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:51.405 slat (nsec): min=13438, max=93893, avg=19736.98, stdev=3511.34 00:10:51.405 clat (usec): min=98, max=582, avg=132.63, stdev=19.79 00:10:51.405 lat (usec): min=116, max=603, avg=152.37, stdev=21.25 00:10:51.405 clat percentiles (usec): 00:10:51.405 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 116], 20.00th=[ 120], 00:10:51.405 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 135], 00:10:51.405 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 153], 95.00th=[ 163], 00:10:51.405 | 99.00th=[ 182], 99.50th=[ 196], 99.90th=[ 343], 99.95th=[ 375], 00:10:51.405 | 99.99th=[ 586] 00:10:51.405 bw ( KiB/s): min=12288, max=12288, per=27.30%, avg=12288.00, stdev= 0.00, samples=1 00:10:51.405 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:51.405 lat (usec) : 100=0.05%, 250=99.56%, 500=0.35%, 750=0.03% 00:10:51.405 cpu : usr=2.20%, sys=7.80%, ctx=5933, majf=0, minf=11 00:10:51.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.405 issued rwts: total=2861,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.405 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.405 job1: (groupid=0, jobs=1): err= 0: pid=80018: Mon Dec 16 14:26:43 2024 00:10:51.405 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(9.89MiB/1001msec) 00:10:51.405 slat (nsec): min=8315, max=54849, avg=12412.01, stdev=2941.30 00:10:51.405 clat (usec): min=124, max=2150, avg=195.48, stdev=66.98 00:10:51.405 lat (usec): min=143, max=2163, avg=207.89, stdev=67.65 00:10:51.405 clat percentiles (usec): 00:10:51.405 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:10:51.405 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 174], 00:10:51.405 | 70.00th=[ 245], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 289], 00:10:51.405 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 392], 99.95th=[ 799], 00:10:51.405 | 99.99th=[ 2147] 00:10:51.405 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:51.405 slat (nsec): min=12943, max=97995, avg=20967.44, stdev=5487.54 00:10:51.405 clat (usec): min=95, max=289, avg=160.65, stdev=45.46 00:10:51.405 lat (usec): min=112, max=373, avg=181.62, stdev=49.67 00:10:51.405 clat percentiles (usec): 00:10:51.405 | 1.00th=[ 104], 5.00th=[ 112], 10.00th=[ 116], 20.00th=[ 121], 00:10:51.405 | 30.00th=[ 125], 40.00th=[ 130], 50.00th=[ 137], 60.00th=[ 172], 00:10:51.405 | 70.00th=[ 200], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 235], 00:10:51.406 | 99.00th=[ 249], 99.50th=[ 255], 99.90th=[ 273], 99.95th=[ 277], 00:10:51.406 | 99.99th=[ 289] 00:10:51.406 bw ( KiB/s): min= 8192, max= 8192, per=18.20%, avg=8192.00, stdev= 0.00, samples=1 00:10:51.406 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:51.406 lat (usec) : 100=0.14%, 250=85.35%, 500=14.47%, 1000=0.02% 00:10:51.406 lat (msec) : 4=0.02% 00:10:51.406 cpu : usr=1.80%, sys=7.20%, ctx=5093, majf=0, minf=9 00:10:51.406 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.406 issued rwts: total=2532,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.406 job2: (groupid=0, jobs=1): err= 0: pid=80019: Mon Dec 16 14:26:43 2024 00:10:51.406 read: IOPS=2642, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:10:51.406 slat (usec): min=11, max=204, avg=15.86, stdev= 6.90 00:10:51.406 clat (usec): min=5, max=396, avg=176.38, stdev=15.39 00:10:51.406 lat (usec): min=159, max=408, avg=192.24, stdev=17.71 00:10:51.406 clat percentiles (usec): 00:10:51.406 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:10:51.406 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:10:51.406 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 198], 00:10:51.406 | 99.00th=[ 212], 99.50th=[ 229], 99.90th=[ 334], 99.95th=[ 383], 00:10:51.406 | 99.99th=[ 396] 00:10:51.406 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:51.406 slat (nsec): min=13583, max=96506, avg=20605.77, stdev=6113.57 00:10:51.406 clat (usec): min=103, max=2436, avg=136.20, stdev=43.95 00:10:51.406 lat (usec): min=121, max=2496, avg=156.81, stdev=45.10 00:10:51.406 clat percentiles (usec): 00:10:51.406 | 1.00th=[ 112], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 126], 00:10:51.406 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:10:51.406 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:10:51.406 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 229], 99.95th=[ 465], 00:10:51.406 | 99.99th=[ 2442] 00:10:51.406 bw ( KiB/s): min=12288, max=12288, per=27.30%, avg=12288.00, stdev= 0.00, samples=1 00:10:51.406 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:51.406 lat (usec) : 10=0.02%, 250=99.77%, 500=0.19% 00:10:51.406 lat (msec) : 4=0.02% 00:10:51.406 cpu : usr=2.30%, sys=8.30%, ctx=5717, majf=0, minf=13 00:10:51.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.406 issued rwts: total=2645,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.406 job3: (groupid=0, jobs=1): err= 0: pid=80020: Mon Dec 16 14:26:43 2024 00:10:51.406 read: IOPS=2214, BW=8859KiB/s (9072kB/s)(8868KiB/1001msec) 00:10:51.406 slat (nsec): min=8469, max=46542, avg=14402.90, stdev=4684.04 00:10:51.406 clat (usec): min=151, max=2081, avg=212.48, stdev=63.37 00:10:51.406 lat (usec): min=163, max=2101, avg=226.89, stdev=64.98 00:10:51.406 clat percentiles (usec): 00:10:51.406 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:10:51.406 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 200], 00:10:51.406 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:10:51.406 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 807], 99.95th=[ 1012], 00:10:51.406 | 99.99th=[ 2089] 00:10:51.406 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:51.406 slat (nsec): min=12636, max=70945, avg=22367.51, stdev=8830.50 00:10:51.406 clat (usec): min=118, max=273, avg=168.51, stdev=36.92 00:10:51.406 lat (usec): min=135, max=305, 
avg=190.88, stdev=42.11 00:10:51.406 clat percentiles (usec): 00:10:51.406 | 1.00th=[ 123], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 135], 00:10:51.406 | 30.00th=[ 139], 40.00th=[ 145], 50.00th=[ 151], 60.00th=[ 180], 00:10:51.406 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 229], 00:10:51.406 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 260], 99.95th=[ 269], 00:10:51.406 | 99.99th=[ 273] 00:10:51.406 bw ( KiB/s): min= 8192, max= 8192, per=18.20%, avg=8192.00, stdev= 0.00, samples=1 00:10:51.406 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:51.406 lat (usec) : 250=85.33%, 500=14.59%, 750=0.02%, 1000=0.02% 00:10:51.406 lat (msec) : 2=0.02%, 4=0.02% 00:10:51.406 cpu : usr=1.80%, sys=7.60%, ctx=4778, majf=0, minf=13 00:10:51.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.406 issued rwts: total=2217,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.406 00:10:51.406 Run status group 0 (all jobs): 00:10:51.406 READ: bw=40.0MiB/s (42.0MB/s), 8859KiB/s-11.2MiB/s (9072kB/s-11.7MB/s), io=40.1MiB (42.0MB), run=1001-1001msec 00:10:51.406 WRITE: bw=44.0MiB/s (46.1MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=44.0MiB (46.1MB), run=1001-1001msec 00:10:51.406 00:10:51.406 Disk stats (read/write): 00:10:51.406 nvme0n1: ios=2609/2560, merge=0/0, ticks=472/357, in_queue=829, util=88.88% 00:10:51.406 nvme0n2: ios=2097/2250, merge=0/0, ticks=456/366, in_queue=822, util=89.71% 00:10:51.406 nvme0n3: ios=2421/2560, merge=0/0, ticks=470/358, in_queue=828, util=89.66% 00:10:51.406 nvme0n4: ios=1983/2048, merge=0/0, ticks=432/354, in_queue=786, util=89.80% 00:10:51.406 14:26:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:51.406 [global] 00:10:51.406 thread=1 00:10:51.406 invalidate=1 00:10:51.406 rw=write 00:10:51.406 time_based=1 00:10:51.406 runtime=1 00:10:51.406 ioengine=libaio 00:10:51.406 direct=1 00:10:51.406 bs=4096 00:10:51.406 iodepth=128 00:10:51.406 norandommap=0 00:10:51.406 numjobs=1 00:10:51.406 00:10:51.406 verify_dump=1 00:10:51.406 verify_backlog=512 00:10:51.406 verify_state_save=0 00:10:51.406 do_verify=1 00:10:51.406 verify=crc32c-intel 00:10:51.406 [job0] 00:10:51.406 filename=/dev/nvme0n1 00:10:51.406 [job1] 00:10:51.406 filename=/dev/nvme0n2 00:10:51.406 [job2] 00:10:51.406 filename=/dev/nvme0n3 00:10:51.406 [job3] 00:10:51.406 filename=/dev/nvme0n4 00:10:51.406 Could not set queue depth (nvme0n1) 00:10:51.406 Could not set queue depth (nvme0n2) 00:10:51.406 Could not set queue depth (nvme0n3) 00:10:51.406 Could not set queue depth (nvme0n4) 00:10:51.406 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.406 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.406 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.406 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.406 fio-3.35 00:10:51.406 Starting 4 threads 00:10:52.779 00:10:52.779 job0: (groupid=0, jobs=1): err= 0: pid=80073: Mon Dec 16 14:26:44 2024 
00:10:52.779 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:10:52.779 slat (usec): min=4, max=3451, avg=84.82, stdev=400.34 00:10:52.779 clat (usec): min=8434, max=12496, avg=11333.62, stdev=489.42 00:10:52.779 lat (usec): min=10571, max=12516, avg=11418.44, stdev=293.61 00:10:52.779 clat percentiles (usec): 00:10:52.779 | 1.00th=[ 8979], 5.00th=[10945], 10.00th=[10945], 20.00th=[11076], 00:10:52.779 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11338], 60.00th=[11469], 00:10:52.779 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11731], 95.00th=[11863], 00:10:52.779 | 99.00th=[12387], 99.50th=[12387], 99.90th=[12518], 99.95th=[12518], 00:10:52.779 | 99.99th=[12518] 00:10:52.779 write: IOPS=5845, BW=22.8MiB/s (23.9MB/s)(22.9MiB/1002msec); 0 zone resets 00:10:52.779 slat (usec): min=9, max=2531, avg=82.18, stdev=345.86 00:10:52.779 clat (usec): min=256, max=12151, avg=10724.73, stdev=889.20 00:10:52.779 lat (usec): min=2208, max=12185, avg=10806.91, stdev=816.84 00:10:52.779 clat percentiles (usec): 00:10:52.779 | 1.00th=[ 5473], 5.00th=[10159], 10.00th=[10421], 20.00th=[10552], 00:10:52.779 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10814], 60.00th=[10945], 00:10:52.779 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11207], 95.00th=[11338], 00:10:52.779 | 99.00th=[11469], 99.50th=[11994], 99.90th=[12125], 99.95th=[12125], 00:10:52.779 | 99.99th=[12125] 00:10:52.779 bw ( KiB/s): min=21256, max=24576, per=34.64%, avg=22916.00, stdev=2347.59, samples=2 00:10:52.779 iops : min= 5314, max= 6144, avg=5729.00, stdev=586.90, samples=2 00:10:52.779 lat (usec) : 500=0.01% 00:10:52.779 lat (msec) : 4=0.28%, 10=3.65%, 20=96.07% 00:10:52.779 cpu : usr=5.39%, sys=13.99%, ctx=364, majf=0, minf=1 00:10:52.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:52.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.779 issued rwts: total=5632,5857,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.779 job1: (groupid=0, jobs=1): err= 0: pid=80074: Mon Dec 16 14:26:44 2024 00:10:52.779 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:10:52.779 slat (usec): min=5, max=6683, avg=156.92, stdev=811.92 00:10:52.779 clat (usec): min=12294, max=29202, avg=20271.06, stdev=4220.60 00:10:52.779 lat (usec): min=14766, max=29225, avg=20427.98, stdev=4181.86 00:10:52.779 clat percentiles (usec): 00:10:52.779 | 1.00th=[14091], 5.00th=[15401], 10.00th=[16581], 20.00th=[17695], 00:10:52.779 | 30.00th=[17957], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:10:52.779 | 70.00th=[21890], 80.00th=[24511], 90.00th=[28443], 95.00th=[28967], 00:10:52.779 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:10:52.779 | 99.99th=[29230] 00:10:52.779 write: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1004msec); 0 zone resets 00:10:52.779 slat (usec): min=10, max=10558, avg=138.62, stdev=691.03 00:10:52.779 clat (usec): min=733, max=33602, avg=17999.44, stdev=4990.02 00:10:52.779 lat (usec): min=3307, max=33618, avg=18138.06, stdev=4970.48 00:10:52.779 clat percentiles (usec): 00:10:52.779 | 1.00th=[ 7177], 5.00th=[13566], 10.00th=[13829], 20.00th=[14091], 00:10:52.779 | 30.00th=[14484], 40.00th=[14746], 50.00th=[17171], 60.00th=[19268], 00:10:52.779 | 70.00th=[19530], 80.00th=[19792], 90.00th=[26084], 95.00th=[28443], 00:10:52.779 | 99.00th=[32637], 99.50th=[33817], 
99.90th=[33817], 99.95th=[33817], 00:10:52.779 | 99.99th=[33817] 00:10:52.779 bw ( KiB/s): min=12312, max=15112, per=20.73%, avg=13712.00, stdev=1979.90, samples=2 00:10:52.779 iops : min= 3078, max= 3778, avg=3428.00, stdev=494.97, samples=2 00:10:52.779 lat (usec) : 750=0.02% 00:10:52.779 lat (msec) : 4=0.44%, 10=0.53%, 20=73.28%, 50=25.74% 00:10:52.779 cpu : usr=2.89%, sys=10.07%, ctx=238, majf=0, minf=6 00:10:52.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:52.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.779 issued rwts: total=3072,3553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.779 job2: (groupid=0, jobs=1): err= 0: pid=80075: Mon Dec 16 14:26:44 2024 00:10:52.779 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:10:52.779 slat (usec): min=6, max=9651, avg=207.75, stdev=878.07 00:10:52.779 clat (usec): min=14374, max=44159, avg=25446.27, stdev=4281.18 00:10:52.779 lat (usec): min=14392, max=44180, avg=25654.02, stdev=4363.29 00:10:52.779 clat percentiles (usec): 00:10:52.779 | 1.00th=[18744], 5.00th=[19792], 10.00th=[21103], 20.00th=[21627], 00:10:52.779 | 30.00th=[21627], 40.00th=[23725], 50.00th=[25035], 60.00th=[26870], 00:10:52.779 | 70.00th=[28705], 80.00th=[28967], 90.00th=[29492], 95.00th=[31851], 00:10:52.779 | 99.00th=[38536], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:10:52.779 | 99.99th=[44303] 00:10:52.779 write: IOPS=2093, BW=8374KiB/s (8575kB/s)(8424KiB/1006msec); 0 zone resets 00:10:52.779 slat (usec): min=8, max=9697, avg=264.13, stdev=870.39 00:10:52.779 clat (usec): min=2673, max=57854, avg=35261.08, stdev=11263.77 00:10:52.779 lat (usec): min=5552, max=57878, avg=35525.21, stdev=11322.23 00:10:52.779 clat percentiles (usec): 00:10:52.779 | 1.00th=[ 9241], 5.00th=[19268], 10.00th=[21103], 20.00th=[22152], 00:10:52.779 | 30.00th=[29492], 40.00th=[34866], 50.00th=[35914], 60.00th=[36439], 00:10:52.779 | 70.00th=[41157], 80.00th=[45876], 90.00th=[51119], 95.00th=[53740], 00:10:52.779 | 99.00th=[56886], 99.50th=[57410], 99.90th=[57934], 99.95th=[57934], 00:10:52.779 | 99.99th=[57934] 00:10:52.779 bw ( KiB/s): min= 7600, max= 8801, per=12.40%, avg=8200.50, stdev=849.24, samples=2 00:10:52.779 iops : min= 1900, max= 2200, avg=2050.00, stdev=212.13, samples=2 00:10:52.779 lat (msec) : 4=0.02%, 10=1.06%, 20=5.37%, 50=87.82%, 100=5.73% 00:10:52.779 cpu : usr=2.49%, sys=7.06%, ctx=302, majf=0, minf=9 00:10:52.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:10:52.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.779 issued rwts: total=2048,2106,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.779 job3: (groupid=0, jobs=1): err= 0: pid=80076: Mon Dec 16 14:26:44 2024 00:10:52.779 read: IOPS=4722, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1003msec) 00:10:52.780 slat (usec): min=5, max=3244, avg=99.58, stdev=473.75 00:10:52.780 clat (usec): min=303, max=13941, avg=13047.98, stdev=1152.98 00:10:52.780 lat (usec): min=2953, max=13966, avg=13147.56, stdev=1053.63 00:10:52.780 clat percentiles (usec): 00:10:52.780 | 1.00th=[ 6652], 5.00th=[11076], 10.00th=[12780], 20.00th=[12911], 00:10:52.780 | 30.00th=[13042], 
40.00th=[13173], 50.00th=[13304], 60.00th=[13304], 00:10:52.780 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13698], 95.00th=[13698], 00:10:52.780 | 99.00th=[13829], 99.50th=[13829], 99.90th=[13960], 99.95th=[13960], 00:10:52.780 | 99.99th=[13960] 00:10:52.780 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:52.780 slat (usec): min=8, max=4279, avg=96.29, stdev=422.95 00:10:52.780 clat (usec): min=9272, max=13990, avg=12657.85, stdev=579.20 00:10:52.780 lat (usec): min=10401, max=14015, avg=12754.14, stdev=393.02 00:10:52.780 clat percentiles (usec): 00:10:52.780 | 1.00th=[10159], 5.00th=[11994], 10.00th=[12125], 20.00th=[12387], 00:10:52.780 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12780], 00:10:52.780 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13173], 95.00th=[13435], 00:10:52.780 | 99.00th=[13829], 99.50th=[13960], 99.90th=[13960], 99.95th=[13960], 00:10:52.780 | 99.99th=[13960] 00:10:52.780 bw ( KiB/s): min=20480, max=20521, per=30.99%, avg=20500.50, stdev=28.99, samples=2 00:10:52.780 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:10:52.780 lat (usec) : 500=0.01% 00:10:52.780 lat (msec) : 4=0.32%, 10=0.72%, 20=98.94% 00:10:52.780 cpu : usr=4.59%, sys=12.77%, ctx=309, majf=0, minf=1 00:10:52.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:52.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.780 issued rwts: total=4737,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.780 00:10:52.780 Run status group 0 (all jobs): 00:10:52.780 READ: bw=60.1MiB/s (63.1MB/s), 8143KiB/s-22.0MiB/s (8339kB/s-23.0MB/s), io=60.5MiB (63.4MB), run=1002-1006msec 00:10:52.780 WRITE: bw=64.6MiB/s (67.7MB/s), 8374KiB/s-22.8MiB/s (8575kB/s-23.9MB/s), io=65.0MiB (68.1MB), run=1002-1006msec 00:10:52.780 00:10:52.780 Disk stats (read/write): 00:10:52.780 nvme0n1: ios=4850/5120, merge=0/0, ticks=12166/11732, in_queue=23898, util=88.38% 00:10:52.780 nvme0n2: ios=2765/3072, merge=0/0, ticks=13039/12117, in_queue=25156, util=88.68% 00:10:52.780 nvme0n3: ios=1536/1967, merge=0/0, ticks=12930/22099, in_queue=35029, util=89.20% 00:10:52.780 nvme0n4: ios=4096/4416, merge=0/0, ticks=12104/12400, in_queue=24504, util=89.75% 00:10:52.780 14:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:52.780 [global] 00:10:52.780 thread=1 00:10:52.780 invalidate=1 00:10:52.780 rw=randwrite 00:10:52.780 time_based=1 00:10:52.780 runtime=1 00:10:52.780 ioengine=libaio 00:10:52.780 direct=1 00:10:52.780 bs=4096 00:10:52.780 iodepth=128 00:10:52.780 norandommap=0 00:10:52.780 numjobs=1 00:10:52.780 00:10:52.780 verify_dump=1 00:10:52.780 verify_backlog=512 00:10:52.780 verify_state_save=0 00:10:52.780 do_verify=1 00:10:52.780 verify=crc32c-intel 00:10:52.780 [job0] 00:10:52.780 filename=/dev/nvme0n1 00:10:52.780 [job1] 00:10:52.780 filename=/dev/nvme0n2 00:10:52.780 [job2] 00:10:52.780 filename=/dev/nvme0n3 00:10:52.780 [job3] 00:10:52.780 filename=/dev/nvme0n4 00:10:52.780 Could not set queue depth (nvme0n1) 00:10:52.780 Could not set queue depth (nvme0n2) 00:10:52.780 Could not set queue depth (nvme0n3) 00:10:52.780 Could not set queue depth (nvme0n4) 00:10:52.780 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.780 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.780 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.780 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.780 fio-3.35 00:10:52.780 Starting 4 threads 00:10:54.151 00:10:54.151 job0: (groupid=0, jobs=1): err= 0: pid=80143: Mon Dec 16 14:26:46 2024 00:10:54.151 read: IOPS=5606, BW=21.9MiB/s (23.0MB/s)(21.9MiB/1002msec) 00:10:54.151 slat (usec): min=6, max=5590, avg=87.23, stdev=402.18 00:10:54.151 clat (usec): min=1594, max=17262, avg=11315.90, stdev=1242.45 00:10:54.151 lat (usec): min=1606, max=17327, avg=11403.13, stdev=1264.94 00:10:54.151 clat percentiles (usec): 00:10:54.151 | 1.00th=[ 7046], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10945], 00:10:54.151 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11469], 00:10:54.151 | 70.00th=[11600], 80.00th=[11731], 90.00th=[12125], 95.00th=[13173], 00:10:54.151 | 99.00th=[14746], 99.50th=[15270], 99.90th=[16057], 99.95th=[16057], 00:10:54.151 | 99.99th=[17171] 00:10:54.151 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:54.151 slat (usec): min=10, max=10339, avg=83.05, stdev=486.91 00:10:54.151 clat (usec): min=5144, max=38275, avg=11182.11, stdev=3472.79 00:10:54.151 lat (usec): min=5165, max=40257, avg=11265.16, stdev=3513.35 00:10:54.151 clat percentiles (usec): 00:10:54.151 | 1.00th=[ 5276], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10159], 00:10:54.151 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:10:54.151 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11731], 95.00th=[13960], 00:10:54.151 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:10:54.151 | 99.99th=[38536] 00:10:54.151 bw ( KiB/s): min=22160, max=22941, per=34.72%, avg=22550.50, stdev=552.25, samples=2 00:10:54.151 iops : min= 5540, max= 5735, avg=5637.50, stdev=137.89, samples=2 00:10:54.151 lat (msec) : 2=0.11%, 4=0.19%, 10=10.56%, 20=87.46%, 50=1.69% 00:10:54.151 cpu : usr=5.19%, sys=13.99%, ctx=396, majf=0, minf=7 00:10:54.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:54.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.151 issued rwts: total=5618,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.151 job1: (groupid=0, jobs=1): err= 0: pid=80144: Mon Dec 16 14:26:46 2024 00:10:54.151 read: IOPS=2937, BW=11.5MiB/s (12.0MB/s)(11.6MiB/1008msec) 00:10:54.151 slat (usec): min=7, max=10086, avg=155.71, stdev=792.27 00:10:54.151 clat (usec): min=2713, max=64911, avg=19607.67, stdev=6208.28 00:10:54.151 lat (usec): min=9191, max=64938, avg=19763.37, stdev=6253.87 00:10:54.151 clat percentiles (usec): 00:10:54.151 | 1.00th=[11863], 5.00th=[14222], 10.00th=[14615], 20.00th=[16057], 00:10:54.151 | 30.00th=[16319], 40.00th=[16450], 50.00th=[16712], 60.00th=[17957], 00:10:54.151 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24773], 95.00th=[29230], 00:10:54.151 | 99.00th=[51643], 99.50th=[58459], 99.90th=[64750], 99.95th=[64750], 00:10:54.151 | 99.99th=[64750] 00:10:54.151 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone 
resets 00:10:54.151 slat (usec): min=10, max=11287, avg=169.02, stdev=883.62 00:10:54.151 clat (usec): min=8947, max=79354, avg=22685.72, stdev=17118.14 00:10:54.151 lat (usec): min=8965, max=79375, avg=22854.74, stdev=17237.59 00:10:54.151 clat percentiles (usec): 00:10:54.151 | 1.00th=[ 9372], 5.00th=[11207], 10.00th=[11863], 20.00th=[12125], 00:10:54.151 | 30.00th=[12387], 40.00th=[13173], 50.00th=[14353], 60.00th=[16319], 00:10:54.151 | 70.00th=[21890], 80.00th=[32637], 90.00th=[46924], 95.00th=[64226], 00:10:54.151 | 99.00th=[79168], 99.50th=[79168], 99.90th=[79168], 99.95th=[79168], 00:10:54.151 | 99.99th=[79168] 00:10:54.151 bw ( KiB/s): min= 8192, max=16384, per=18.92%, avg=12288.00, stdev=5792.62, samples=2 00:10:54.151 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:10:54.151 lat (msec) : 4=0.02%, 10=0.88%, 20=65.79%, 50=28.08%, 100=5.24% 00:10:54.151 cpu : usr=2.78%, sys=8.44%, ctx=248, majf=0, minf=13 00:10:54.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:54.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.151 issued rwts: total=2961,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.151 job2: (groupid=0, jobs=1): err= 0: pid=80145: Mon Dec 16 14:26:46 2024 00:10:54.151 read: IOPS=2154, BW=8618KiB/s (8825kB/s)(8696KiB/1009msec) 00:10:54.151 slat (usec): min=8, max=15520, avg=187.67, stdev=919.06 00:10:54.151 clat (usec): min=3790, max=69116, avg=21988.37, stdev=8066.11 00:10:54.151 lat (usec): min=9651, max=69155, avg=22176.04, stdev=8136.17 00:10:54.151 clat percentiles (usec): 00:10:54.151 | 1.00th=[10028], 5.00th=[13960], 10.00th=[16319], 20.00th=[17171], 00:10:54.151 | 30.00th=[17433], 40.00th=[17433], 50.00th=[19006], 60.00th=[23987], 00:10:54.151 | 70.00th=[24249], 80.00th=[25035], 90.00th=[28443], 95.00th=[34866], 00:10:54.151 | 99.00th=[64226], 99.50th=[66847], 99.90th=[67634], 99.95th=[67634], 00:10:54.151 | 99.99th=[68682] 00:10:54.151 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:10:54.151 slat (usec): min=10, max=10244, avg=224.68, stdev=969.05 00:10:54.151 clat (usec): min=8094, max=75089, avg=30965.04, stdev=19534.25 00:10:54.151 lat (usec): min=8140, max=75121, avg=31189.73, stdev=19662.31 00:10:54.151 clat percentiles (usec): 00:10:54.151 | 1.00th=[11076], 5.00th=[13173], 10.00th=[13566], 20.00th=[13829], 00:10:54.151 | 30.00th=[14222], 40.00th=[18220], 50.00th=[21103], 60.00th=[31327], 00:10:54.151 | 70.00th=[43254], 80.00th=[51119], 90.00th=[61604], 95.00th=[69731], 00:10:54.151 | 99.00th=[72877], 99.50th=[73925], 99.90th=[74974], 99.95th=[74974], 00:10:54.151 | 99.99th=[74974] 00:10:54.151 bw ( KiB/s): min= 7352, max=13138, per=15.77%, avg=10245.00, stdev=4091.32, samples=2 00:10:54.151 iops : min= 1838, max= 3284, avg=2561.00, stdev=1022.48, samples=2 00:10:54.151 lat (msec) : 4=0.02%, 10=0.44%, 20=49.24%, 50=38.02%, 100=12.27% 00:10:54.151 cpu : usr=2.08%, sys=8.13%, ctx=245, majf=0, minf=13 00:10:54.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:10:54.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.151 issued rwts: total=2174,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.151 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:10:54.151 job3: (groupid=0, jobs=1): err= 0: pid=80146: Mon Dec 16 14:26:46 2024 00:10:54.151 read: IOPS=4914, BW=19.2MiB/s (20.1MB/s)(19.2MiB/1002msec) 00:10:54.151 slat (usec): min=4, max=5705, avg=101.61, stdev=490.75 00:10:54.151 clat (usec): min=578, max=18505, avg=12789.37, stdev=1747.33 00:10:54.151 lat (usec): min=1849, max=22077, avg=12890.98, stdev=1778.48 00:10:54.151 clat percentiles (usec): 00:10:54.151 | 1.00th=[ 6128], 5.00th=[ 9765], 10.00th=[11469], 20.00th=[12256], 00:10:54.151 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:10:54.151 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13960], 95.00th=[16057], 00:10:54.151 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18482], 99.95th=[18482], 00:10:54.151 | 99.99th=[18482] 00:10:54.151 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:54.151 slat (usec): min=7, max=5270, avg=89.56, stdev=451.50 00:10:54.151 clat (usec): min=5423, max=18604, avg=12388.61, stdev=1476.66 00:10:54.151 lat (usec): min=5451, max=18623, avg=12478.17, stdev=1531.94 00:10:54.151 clat percentiles (usec): 00:10:54.151 | 1.00th=[ 8717], 5.00th=[10290], 10.00th=[10683], 20.00th=[11469], 00:10:54.151 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649], 00:10:54.151 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13435], 95.00th=[15139], 00:10:54.151 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18482], 99.95th=[18482], 00:10:54.151 | 99.99th=[18482] 00:10:54.151 bw ( KiB/s): min=20480, max=20480, per=31.53%, avg=20480.00, stdev= 0.00, samples=1 00:10:54.151 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:54.151 lat (usec) : 750=0.01% 00:10:54.151 lat (msec) : 2=0.08%, 4=0.13%, 10=4.78%, 20=95.00% 00:10:54.151 cpu : usr=5.00%, sys=14.19%, ctx=438, majf=0, minf=13 00:10:54.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:54.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.151 issued rwts: total=4924,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.151 00:10:54.151 Run status group 0 (all jobs): 00:10:54.151 READ: bw=60.7MiB/s (63.6MB/s), 8618KiB/s-21.9MiB/s (8825kB/s-23.0MB/s), io=61.2MiB (64.2MB), run=1002-1009msec 00:10:54.151 WRITE: bw=63.4MiB/s (66.5MB/s), 9.91MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=64.0MiB (67.1MB), run=1002-1009msec 00:10:54.151 00:10:54.151 Disk stats (read/write): 00:10:54.151 nvme0n1: ios=4657/5039, merge=0/0, ticks=25092/24773, in_queue=49865, util=88.26% 00:10:54.151 nvme0n2: ios=2286/2560, merge=0/0, ticks=21771/29611, in_queue=51382, util=88.13% 00:10:54.151 nvme0n3: ios=2048/2279, merge=0/0, ticks=22606/28305, in_queue=50911, util=89.16% 00:10:54.151 nvme0n4: ios=4096/4487, merge=0/0, ticks=25076/23889, in_queue=48965, util=88.89% 00:10:54.152 14:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:54.152 14:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=80159 00:10:54.152 14:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:54.152 14:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:54.152 [global] 00:10:54.152 thread=1 00:10:54.152 invalidate=1 00:10:54.152 rw=read 
00:10:54.152 time_based=1 00:10:54.152 runtime=10 00:10:54.152 ioengine=libaio 00:10:54.152 direct=1 00:10:54.152 bs=4096 00:10:54.152 iodepth=1 00:10:54.152 norandommap=1 00:10:54.152 numjobs=1 00:10:54.152 00:10:54.152 [job0] 00:10:54.152 filename=/dev/nvme0n1 00:10:54.152 [job1] 00:10:54.152 filename=/dev/nvme0n2 00:10:54.152 [job2] 00:10:54.152 filename=/dev/nvme0n3 00:10:54.152 [job3] 00:10:54.152 filename=/dev/nvme0n4 00:10:54.152 Could not set queue depth (nvme0n1) 00:10:54.152 Could not set queue depth (nvme0n2) 00:10:54.152 Could not set queue depth (nvme0n3) 00:10:54.152 Could not set queue depth (nvme0n4) 00:10:54.152 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.152 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.152 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.152 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.152 fio-3.35 00:10:54.152 Starting 4 threads 00:10:57.432 14:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:57.432 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=47583232, buflen=4096 00:10:57.432 fio: pid=80202, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:57.432 14:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:57.691 fio: pid=80201, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:57.691 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=69042176, buflen=4096 00:10:57.691 14:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.691 14:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:57.949 fio: pid=80199, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:57.949 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=17903616, buflen=4096 00:10:57.949 14:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.949 14:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:58.208 fio: pid=80200, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:58.208 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1114112, buflen=4096 00:10:58.208 14:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.208 14:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:58.208 00:10:58.208 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=80199: Mon Dec 16 14:26:50 2024 00:10:58.208 read: IOPS=5716, BW=22.3MiB/s (23.4MB/s)(81.1MiB/3631msec) 00:10:58.208 slat (usec): min=10, max=13491, 
avg=15.02, stdev=160.37 00:10:58.208 clat (usec): min=118, max=2169, avg=158.69, stdev=29.21 00:10:58.208 lat (usec): min=129, max=13660, avg=173.71, stdev=163.13 00:10:58.208 clat percentiles (usec): 00:10:58.208 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:10:58.208 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:10:58.208 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 182], 00:10:58.208 | 99.00th=[ 194], 99.50th=[ 202], 99.90th=[ 233], 99.95th=[ 562], 00:10:58.208 | 99.99th=[ 1598] 00:10:58.208 bw ( KiB/s): min=21952, max=23608, per=34.34%, avg=22934.29, stdev=652.83, samples=7 00:10:58.208 iops : min= 5488, max= 5902, avg=5733.57, stdev=163.21, samples=7 00:10:58.208 lat (usec) : 250=99.91%, 500=0.02%, 750=0.03% 00:10:58.208 lat (msec) : 2=0.02%, 4=0.01% 00:10:58.208 cpu : usr=1.68%, sys=6.47%, ctx=20762, majf=0, minf=1 00:10:58.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.208 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.208 issued rwts: total=20756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.208 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=80200: Mon Dec 16 14:26:50 2024 00:10:58.208 read: IOPS=4221, BW=16.5MiB/s (17.3MB/s)(65.1MiB/3946msec) 00:10:58.208 slat (usec): min=7, max=17440, avg=16.47, stdev=223.38 00:10:58.208 clat (usec): min=118, max=5969, avg=219.30, stdev=119.97 00:10:58.208 lat (usec): min=129, max=17629, avg=235.78, stdev=253.20 00:10:58.208 clat percentiles (usec): 00:10:58.208 | 1.00th=[ 131], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 157], 00:10:58.208 | 30.00th=[ 165], 40.00th=[ 176], 50.00th=[ 241], 60.00th=[ 251], 00:10:58.208 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:10:58.208 | 99.00th=[ 310], 99.50th=[ 371], 99.90th=[ 1516], 99.95th=[ 3392], 00:10:58.208 | 99.99th=[ 4490] 00:10:58.208 bw ( KiB/s): min=14448, max=20880, per=24.65%, avg=16463.29, stdev=3005.78, samples=7 00:10:58.208 iops : min= 3612, max= 5220, avg=4115.71, stdev=751.26, samples=7 00:10:58.208 lat (usec) : 250=58.34%, 500=41.35%, 750=0.11%, 1000=0.06% 00:10:58.208 lat (msec) : 2=0.06%, 4=0.05%, 10=0.02% 00:10:58.208 cpu : usr=1.12%, sys=4.79%, ctx=16665, majf=0, minf=2 00:10:58.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.208 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.208 issued rwts: total=16657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.208 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=80201: Mon Dec 16 14:26:50 2024 00:10:58.208 read: IOPS=5117, BW=20.0MiB/s (21.0MB/s)(65.8MiB/3294msec) 00:10:58.208 slat (usec): min=10, max=10499, avg=14.39, stdev=100.63 00:10:58.208 clat (usec): min=142, max=3309, avg=179.85, stdev=40.46 00:10:58.208 lat (usec): min=154, max=10709, avg=194.24, stdev=108.81 00:10:58.208 clat percentiles (usec): 00:10:58.208 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:10:58.208 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:10:58.208 | 70.00th=[ 186], 80.00th=[ 
192], 90.00th=[ 198], 95.00th=[ 206], 00:10:58.208 | 99.00th=[ 223], 99.50th=[ 231], 99.90th=[ 367], 99.95th=[ 545], 00:10:58.208 | 99.99th=[ 2147] 00:10:58.208 bw ( KiB/s): min=20296, max=20824, per=30.83%, avg=20593.33, stdev=209.62, samples=6 00:10:58.208 iops : min= 5074, max= 5206, avg=5148.33, stdev=52.40, samples=6 00:10:58.208 lat (usec) : 250=99.82%, 500=0.09%, 750=0.05% 00:10:58.208 lat (msec) : 2=0.01%, 4=0.02% 00:10:58.208 cpu : usr=1.46%, sys=5.74%, ctx=16860, majf=0, minf=1 00:10:58.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.208 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.208 issued rwts: total=16857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.208 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=80202: Mon Dec 16 14:26:50 2024 00:10:58.208 read: IOPS=3893, BW=15.2MiB/s (15.9MB/s)(45.4MiB/2984msec) 00:10:58.208 slat (nsec): min=7689, max=74744, avg=12553.99, stdev=3344.38 00:10:58.208 clat (usec): min=149, max=2080, avg=243.07, stdev=44.39 00:10:58.208 lat (usec): min=161, max=2096, avg=255.63, stdev=43.60 00:10:58.208 clat percentiles (usec): 00:10:58.208 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 194], 00:10:58.208 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 260], 00:10:58.208 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:10:58.208 | 99.00th=[ 314], 99.50th=[ 343], 99.90th=[ 529], 99.95th=[ 627], 00:10:58.208 | 99.99th=[ 963] 00:10:58.208 bw ( KiB/s): min=14440, max=20328, per=23.66%, avg=15803.20, stdev=2535.78, samples=5 00:10:58.208 iops : min= 3610, max= 5082, avg=3950.80, stdev=633.94, samples=5 00:10:58.208 lat (usec) : 250=43.44%, 500=56.39%, 750=0.14%, 1000=0.02% 00:10:58.208 lat (msec) : 4=0.01% 00:10:58.208 cpu : usr=0.94%, sys=4.59%, ctx=11619, majf=0, minf=2 00:10:58.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.208 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.208 issued rwts: total=11618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.208 00:10:58.208 Run status group 0 (all jobs): 00:10:58.208 READ: bw=65.2MiB/s (68.4MB/s), 15.2MiB/s-22.3MiB/s (15.9MB/s-23.4MB/s), io=257MiB (270MB), run=2984-3946msec 00:10:58.208 00:10:58.208 Disk stats (read/write): 00:10:58.208 nvme0n1: ios=20700/0, merge=0/0, ticks=3325/0, in_queue=3325, util=95.46% 00:10:58.208 nvme0n2: ios=16185/0, merge=0/0, ticks=3513/0, in_queue=3513, util=95.16% 00:10:58.208 nvme0n3: ios=15928/0, merge=0/0, ticks=2925/0, in_queue=2925, util=96.33% 00:10:58.208 nvme0n4: ios=11204/0, merge=0/0, ticks=2675/0, in_queue=2675, util=96.76% 00:10:58.467 14:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.467 14:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:58.725 14:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.725 14:26:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:58.983 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.983 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:59.242 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:59.242 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:59.809 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:59.809 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 80159 00:10:59.809 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:59.809 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:59.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.809 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:59.809 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:59.809 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:59.809 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.809 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:59.809 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.809 nvmf hotplug test: fio failed as expected 00:10:59.809 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:59.809 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:59.809 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:59.809 14:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
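The teardown traced above follows the two-step pattern these fio target tests always use: the initiator disconnects the NVMe-oF controller first, then the subsystem is removed over the SPDK RPC socket before nvmftestfini unloads the nvme kernel modules and dismantles the test network. A minimal sketch of those two steps, assuming the subsystem NQN and repo path used in this run:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1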
00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:00.068 rmmod nvme_tcp 00:11:00.068 rmmod nvme_fabrics 00:11:00.068 rmmod nvme_keyring 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 79779 ']' 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 79779 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 79779 ']' 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 79779 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79779 00:11:00.068 killing process with pid 79779 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79779' 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 79779 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 79779 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:00.068 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip 
link set nvmf_tgt_br nomaster 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:11:00.327 00:11:00.327 real 0m19.493s 00:11:00.327 user 1m12.829s 00:11:00.327 sys 0m10.260s 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.327 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.327 ************************************ 00:11:00.327 END TEST nvmf_fio_target 00:11:00.327 ************************************ 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:00.586 ************************************ 00:11:00.586 START TEST nvmf_bdevio 00:11:00.586 ************************************ 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:00.586 * Looking for test storage... 
00:11:00.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.586 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:00.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.587 --rc genhtml_branch_coverage=1 00:11:00.587 --rc genhtml_function_coverage=1 00:11:00.587 --rc genhtml_legend=1 00:11:00.587 --rc geninfo_all_blocks=1 00:11:00.587 --rc geninfo_unexecuted_blocks=1 00:11:00.587 00:11:00.587 ' 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:00.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.587 --rc genhtml_branch_coverage=1 00:11:00.587 --rc genhtml_function_coverage=1 00:11:00.587 --rc genhtml_legend=1 00:11:00.587 --rc geninfo_all_blocks=1 00:11:00.587 --rc geninfo_unexecuted_blocks=1 00:11:00.587 00:11:00.587 ' 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:00.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.587 --rc genhtml_branch_coverage=1 00:11:00.587 --rc genhtml_function_coverage=1 00:11:00.587 --rc genhtml_legend=1 00:11:00.587 --rc geninfo_all_blocks=1 00:11:00.587 --rc geninfo_unexecuted_blocks=1 00:11:00.587 00:11:00.587 ' 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:00.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.587 --rc genhtml_branch_coverage=1 00:11:00.587 --rc genhtml_function_coverage=1 00:11:00.587 --rc genhtml_legend=1 00:11:00.587 --rc geninfo_all_blocks=1 00:11:00.587 --rc geninfo_unexecuted_blocks=1 00:11:00.587 00:11:00.587 ' 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.587 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
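nvmftestinit, traced below, builds the virtual network the TCP tests run on: leftover interfaces are torn down first (hence the "Cannot find device" notices), then a target network namespace, veth pairs for the initiator and target sides, and a bridge joining them are created, with 10.0.0.1/10.0.0.2 on the initiator side and 10.0.0.3/10.0.0.4 inside the namespace. A condensed sketch of that setup, assuming the interface names nvmf/common.sh uses in this run:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The ping checks that follow confirm the initiator side can reach the target addresses inside the namespace and vice versa before the nvmf target is started.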
00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.587 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:00.846 Cannot find device "nvmf_init_br" 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:00.846 Cannot find device "nvmf_init_br2" 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:00.846 Cannot find device "nvmf_tgt_br" 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:00.846 Cannot find device "nvmf_tgt_br2" 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:00.846 Cannot find device "nvmf_init_br" 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:00.846 Cannot find device "nvmf_init_br2" 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:00.846 Cannot find device "nvmf_tgt_br" 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:00.846 Cannot find device "nvmf_tgt_br2" 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:00.846 Cannot find device "nvmf_br" 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:00.846 Cannot find device "nvmf_init_if" 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:00.846 Cannot find device "nvmf_init_if2" 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:00.846 
14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:00.846 14:26:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:00.846 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:00.846 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:00.846 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:00.846 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:00.847 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:00.847 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:00.847 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:00.847 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:00.847 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:00.847 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:01.105 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:01.105 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:11:01.105 00:11:01.105 --- 10.0.0.3 ping statistics --- 00:11:01.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.105 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:01.105 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:01.105 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:11:01.105 00:11:01.105 --- 10.0.0.4 ping statistics --- 00:11:01.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.105 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:01.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:01.105 00:11:01.105 --- 10.0.0.1 ping statistics --- 00:11:01.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.105 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:01.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:01.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:11:01.105 00:11:01.105 --- 10.0.0.2 ping statistics --- 00:11:01.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.105 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=80521 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 80521 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 80521 ']' 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.105 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.105 [2024-12-16 14:26:53.222483] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:11:01.105 [2024-12-16 14:26:53.222594] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.364 [2024-12-16 14:26:53.376106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.364 [2024-12-16 14:26:53.399605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.364 [2024-12-16 14:26:53.399687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.364 [2024-12-16 14:26:53.399699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.364 [2024-12-16 14:26:53.399708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.364 [2024-12-16 14:26:53.399715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.364 [2024-12-16 14:26:53.400762] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:11:01.364 [2024-12-16 14:26:53.400835] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:11:01.364 [2024-12-16 14:26:53.400881] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:11:01.364 [2024-12-16 14:26:53.400886] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.364 [2024-12-16 14:26:53.431769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.364 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.364 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:01.364 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.364 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.364 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.624 [2024-12-16 14:26:53.578485] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.624 Malloc0 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.624 [2024-12-16 14:26:53.645820] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:01.624 { 00:11:01.624 "params": { 00:11:01.624 "name": "Nvme$subsystem", 00:11:01.624 "trtype": "$TEST_TRANSPORT", 00:11:01.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:01.624 "adrfam": "ipv4", 00:11:01.624 "trsvcid": "$NVMF_PORT", 00:11:01.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:01.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:01.624 "hdgst": ${hdgst:-false}, 00:11:01.624 "ddgst": ${ddgst:-false} 00:11:01.624 }, 00:11:01.624 "method": "bdev_nvme_attach_controller" 00:11:01.624 } 00:11:01.624 EOF 00:11:01.624 )") 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
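Everything the bdevio test needs on the target side is provisioned through rpc_cmd just above: a TCP transport, a 64 MiB malloc bdev, a subsystem, a namespace, and a listener on 10.0.0.3:4420. Equivalent standalone scripts/rpc.py calls, assuming the same names and addresses as this run, would look roughly like:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The JSON printed next is what gen_nvmf_target_json hands to the bdevio app so it can run bdev_nvme_attach_controller against that listener and exercise the resulting Nvme1n1 bdev.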
00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:01.624 14:26:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:01.624 "params": { 00:11:01.624 "name": "Nvme1", 00:11:01.624 "trtype": "tcp", 00:11:01.624 "traddr": "10.0.0.3", 00:11:01.624 "adrfam": "ipv4", 00:11:01.624 "trsvcid": "4420", 00:11:01.624 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:01.624 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:01.624 "hdgst": false, 00:11:01.624 "ddgst": false 00:11:01.624 }, 00:11:01.624 "method": "bdev_nvme_attach_controller" 00:11:01.624 }' 00:11:01.624 [2024-12-16 14:26:53.704467] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:01.624 [2024-12-16 14:26:53.704558] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80549 ] 00:11:01.908 [2024-12-16 14:26:53.860334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:01.908 [2024-12-16 14:26:53.886844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.908 [2024-12-16 14:26:53.886926] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.908 [2024-12-16 14:26:53.886927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.908 [2024-12-16 14:26:53.928366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.908 I/O targets: 00:11:01.908 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:01.908 00:11:01.908 00:11:01.908 CUnit - A unit testing framework for C - Version 2.1-3 00:11:01.908 http://cunit.sourceforge.net/ 00:11:01.908 00:11:01.908 00:11:01.908 Suite: bdevio tests on: Nvme1n1 00:11:01.908 Test: blockdev write read block ...passed 00:11:01.908 Test: blockdev write zeroes read block ...passed 00:11:01.908 Test: blockdev write zeroes read no split ...passed 00:11:01.908 Test: blockdev write zeroes read split ...passed 00:11:01.908 Test: blockdev write zeroes read split partial ...passed 00:11:01.908 Test: blockdev reset ...[2024-12-16 14:26:54.059886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:01.908 [2024-12-16 14:26:54.059990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1b9d0 (9): Bad file descriptor 00:11:01.908 passed 00:11:01.908 Test: blockdev write read 8 blocks ...[2024-12-16 14:26:54.078167] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:01.908 passed 00:11:01.908 Test: blockdev write read size > 128k ...passed 00:11:01.908 Test: blockdev write read invalid size ...passed 00:11:01.908 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:01.908 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:01.908 Test: blockdev write read max offset ...passed 00:11:01.908 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:01.908 Test: blockdev writev readv 8 blocks ...passed 00:11:01.908 Test: blockdev writev readv 30 x 1block ...passed 00:11:01.908 Test: blockdev writev readv block ...passed 00:11:01.908 Test: blockdev writev readv size > 128k ...passed 00:11:01.908 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:01.908 Test: blockdev comparev and writev ...[2024-12-16 14:26:54.085937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.908 [2024-12-16 14:26:54.085995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:01.908 [2024-12-16 14:26:54.086024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.908 [2024-12-16 14:26:54.086044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:01.908 passed 00:11:01.908 Test: blockdev nvme passthru rw ...[2024-12-16 14:26:54.086629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.908 [2024-12-16 14:26:54.086658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:01.908 [2024-12-16 14:26:54.086679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.908 [2024-12-16 14:26:54.086692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:01.908 [2024-12-16 14:26:54.086989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.908 [2024-12-16 14:26:54.087010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:01.908 [2024-12-16 14:26:54.087032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.908 [2024-12-16 14:26:54.087044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:01.908 [2024-12-16 14:26:54.087345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.908 [2024-12-16 14:26:54.087365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:01.908 [2024-12-16 14:26:54.087385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.908 [2024-12-16 14:26:54.087397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:01.908 passed 00:11:01.908 Test: blockdev nvme passthru vendor specific ...passed 00:11:01.908 Test: blockdev nvme admin passthru ...[2024-12-16 14:26:54.088226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.908 [2024-12-16 14:26:54.088255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:01.908 [2024-12-16 14:26:54.088365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.908 [2024-12-16 14:26:54.088384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:01.908 [2024-12-16 14:26:54.088516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.908 [2024-12-16 14:26:54.088535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:01.908 [2024-12-16 14:26:54.088649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.908 [2024-12-16 14:26:54.088668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:02.177 passed 00:11:02.177 Test: blockdev copy ...passed 00:11:02.177 00:11:02.177 Run Summary: Type Total Ran Passed Failed Inactive 00:11:02.177 suites 1 1 n/a 0 0 00:11:02.177 tests 23 23 23 0 0 00:11:02.177 asserts 152 152 152 0 n/a 00:11:02.177 00:11:02.177 Elapsed time = 0.143 seconds 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:02.177 rmmod nvme_tcp 00:11:02.177 rmmod nvme_fabrics 00:11:02.177 rmmod nvme_keyring 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 80521 ']' 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 80521 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 80521 ']' 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 80521 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80521 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:02.177 killing process with pid 80521 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80521' 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 80521 00:11:02.177 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 80521 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:02.436 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:02.695 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:02.695 14:26:54 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:02.695 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:02.695 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:02.695 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:02.695 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.695 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.695 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.695 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:02.695 00:11:02.695 real 0m2.241s 00:11:02.695 user 0m5.575s 00:11:02.695 sys 0m0.779s 00:11:02.695 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.695 14:26:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.695 ************************************ 00:11:02.695 END TEST nvmf_bdevio 00:11:02.695 ************************************ 00:11:02.695 14:26:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:02.695 00:11:02.695 real 2m26.308s 00:11:02.695 user 6m21.508s 00:11:02.695 sys 0m52.690s 00:11:02.695 14:26:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.695 ************************************ 00:11:02.695 END TEST nvmf_target_core 00:11:02.695 14:26:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:02.696 ************************************ 00:11:02.696 14:26:54 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:02.696 14:26:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.696 14:26:54 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.696 14:26:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:02.696 ************************************ 00:11:02.696 START TEST nvmf_target_extra 00:11:02.696 ************************************ 00:11:02.696 14:26:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:02.955 * Looking for test storage... 
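The nvmftestfini sequence traced above tears the fixture down in three steps: stop the nvmf_tgt process, strip only the SPDK-tagged firewall rules, and delete the veth/bridge/namespace plumbing. A condensed sketch with the same names follows; remove_spdk_ns is assumed to boil down to deleting the namespace, since its body is not echoed here.

# Condensed teardown sketch; nvmfpid is the target pid from the trace (80521 in this run).
kill "$nvmfpid"                                         # killprocess: stop nvmf_tgt, then wait for it
modprobe -v -r nvme-tcp nvme-fabrics                    # unload the host-side transport modules
iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: drop only rules tagged SPDK_NVMF
for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br" nomaster
    ip link set "$br" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                        # assumed equivalent of remove_spdk_ns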
00:11:02.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:02.955 14:26:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:02.955 14:26:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:02.955 14:26:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:02.955 14:26:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:02.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.956 --rc genhtml_branch_coverage=1 00:11:02.956 --rc genhtml_function_coverage=1 00:11:02.956 --rc genhtml_legend=1 00:11:02.956 --rc geninfo_all_blocks=1 00:11:02.956 --rc geninfo_unexecuted_blocks=1 00:11:02.956 00:11:02.956 ' 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:02.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.956 --rc genhtml_branch_coverage=1 00:11:02.956 --rc genhtml_function_coverage=1 00:11:02.956 --rc genhtml_legend=1 00:11:02.956 --rc geninfo_all_blocks=1 00:11:02.956 --rc geninfo_unexecuted_blocks=1 00:11:02.956 00:11:02.956 ' 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:02.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.956 --rc genhtml_branch_coverage=1 00:11:02.956 --rc genhtml_function_coverage=1 00:11:02.956 --rc genhtml_legend=1 00:11:02.956 --rc geninfo_all_blocks=1 00:11:02.956 --rc geninfo_unexecuted_blocks=1 00:11:02.956 00:11:02.956 ' 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:02.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.956 --rc genhtml_branch_coverage=1 00:11:02.956 --rc genhtml_function_coverage=1 00:11:02.956 --rc genhtml_legend=1 00:11:02.956 --rc geninfo_all_blocks=1 00:11:02.956 --rc geninfo_unexecuted_blocks=1 00:11:02.956 00:11:02.956 ' 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.956 14:26:55 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.956 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:02.956 ************************************ 00:11:02.956 START TEST nvmf_auth_target 00:11:02.956 ************************************ 00:11:02.956 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:03.216 * Looking for test storage... 
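Both test groups derive NVME_HOSTNQN from nvme gen-hostnqn, which yields an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, with NVME_HOSTID set to the bare UUID portion. A tiny alternative sketch that does not rely on nvme-cli is shown here; it is illustrative only and is not what common.sh actually runs.

# Alternative host NQN generation without nvme-cli (illustrative only).
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
NVME_HOSTID="${NVME_HOSTNQN##*:}"   # bare UUID, e.g. 63735ac0-cf43-4c13-880c-ea4676416181
echo "$NVME_HOSTNQN" "$NVME_HOSTID"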
00:11:03.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:03.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.216 --rc genhtml_branch_coverage=1 00:11:03.216 --rc genhtml_function_coverage=1 00:11:03.216 --rc genhtml_legend=1 00:11:03.216 --rc geninfo_all_blocks=1 00:11:03.216 --rc geninfo_unexecuted_blocks=1 00:11:03.216 00:11:03.216 ' 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:03.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.216 --rc genhtml_branch_coverage=1 00:11:03.216 --rc genhtml_function_coverage=1 00:11:03.216 --rc genhtml_legend=1 00:11:03.216 --rc geninfo_all_blocks=1 00:11:03.216 --rc geninfo_unexecuted_blocks=1 00:11:03.216 00:11:03.216 ' 00:11:03.216 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:03.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.216 --rc genhtml_branch_coverage=1 00:11:03.216 --rc genhtml_function_coverage=1 00:11:03.216 --rc genhtml_legend=1 00:11:03.216 --rc geninfo_all_blocks=1 00:11:03.216 --rc geninfo_unexecuted_blocks=1 00:11:03.216 00:11:03.217 ' 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:03.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.217 --rc genhtml_branch_coverage=1 00:11:03.217 --rc genhtml_function_coverage=1 00:11:03.217 --rc genhtml_legend=1 00:11:03.217 --rc geninfo_all_blocks=1 00:11:03.217 --rc geninfo_unexecuted_blocks=1 00:11:03.217 00:11:03.217 ' 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:03.217 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:03.217 
14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:03.217 Cannot find device "nvmf_init_br" 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:03.217 Cannot find device "nvmf_init_br2" 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:03.217 Cannot find device "nvmf_tgt_br" 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:03.217 Cannot find device "nvmf_tgt_br2" 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:03.217 Cannot find device "nvmf_init_br" 00:11:03.217 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:11:03.218 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:03.218 Cannot find device "nvmf_init_br2" 00:11:03.218 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:11:03.218 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:03.218 Cannot find device "nvmf_tgt_br" 00:11:03.218 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:11:03.218 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:03.218 Cannot find device "nvmf_tgt_br2" 00:11:03.218 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:11:03.218 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:03.218 Cannot find device "nvmf_br" 00:11:03.218 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:11:03.218 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:03.218 Cannot find device "nvmf_init_if" 00:11:03.218 14:26:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:11:03.218 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:03.477 Cannot find device "nvmf_init_if2" 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:03.477 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:03.477 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:03.477 14:26:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:03.477 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:03.736 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:03.736 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:11:03.736 00:11:03.736 --- 10.0.0.3 ping statistics --- 00:11:03.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.736 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:03.736 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:03.736 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:03.736 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:11:03.736 00:11:03.736 --- 10.0.0.4 ping statistics --- 00:11:03.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.736 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:03.736 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:03.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:03.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:03.736 00:11:03.736 --- 10.0.0.1 ping statistics --- 00:11:03.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.736 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:03.736 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:03.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:11:03.736 00:11:03.736 --- 10.0.0.2 ping statistics --- 00:11:03.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.736 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:11:03.736 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.736 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=80831 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 80831 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 80831 ']' 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
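The nvmf_veth_init trace above builds two veth pairs (initiator ends in the root namespace, target ends inside nvmf_tgt_ns_spdk) joined by a bridge, opens TCP/4420, and verifies reachability with ping. A condensed sketch using the same names follows, showing only the first initiator/target pair; the second if2/tgt_if2 pair is set up identically.

# Condensed sketch of the test topology set up above (if2/tgt_if2 pair omitted).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3    # root namespace reaches the target-side address through the bridge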
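The gen_dhchap_key calls traced below draw hex key material from /dev/urandom with xxd and hand it to format_dhchap_key, whose python body xtrace does not echo. A minimal sketch of that shape follows; wrap_dhchap_key is a hypothetical stand-in, and the base64-plus-CRC32 wrapping is an assumption based on the NVMe DH-HMAC-CHAP secret representation rather than something visible in this log.

# Sketch only: wrap_dhchap_key is a hypothetical stand-in for format_dhchap_key.
wrap_dhchap_key() { # args: <hex-key> <hash-id: 0=null 1=sha256 2=sha384 3=sha512>
    python3 -c '
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# Assumption: secret = base64(key + CRC-32(key)), per the DH-HMAC-CHAP secret format.
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
' "$1" "$2"
}

key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48 hex chars, matching "gen_dhchap_key null 48"
file=$(mktemp -t spdk.key-null.XXX)
wrap_dhchap_key "$key" 0 > "$file"      # digest id 0 == null, as in the trace
chmod 0600 "$file"
echo "$file"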
00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.737 14:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=80856 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0f4645f470dbae6026d2f4a9f8c05be9e32abdc869300b3d 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.tMP 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0f4645f470dbae6026d2f4a9f8c05be9e32abdc869300b3d 0 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0f4645f470dbae6026d2f4a9f8c05be9e32abdc869300b3d 0 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0f4645f470dbae6026d2f4a9f8c05be9e32abdc869300b3d 00:11:03.996 14:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.tMP 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.tMP 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.tMP 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5fc5ecff72c4df052d1b9471f7e28fd1850b8cfdfef880ef748d7aead09e471a 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.N14 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5fc5ecff72c4df052d1b9471f7e28fd1850b8cfdfef880ef748d7aead09e471a 3 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5fc5ecff72c4df052d1b9471f7e28fd1850b8cfdfef880ef748d7aead09e471a 3 00:11:03.996 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:03.997 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:03.997 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5fc5ecff72c4df052d1b9471f7e28fd1850b8cfdfef880ef748d7aead09e471a 00:11:03.997 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:03.997 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.N14 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.N14 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.N14 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:04.256 14:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=29f2bffc953ba28198f851db07247a39 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.z6K 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 29f2bffc953ba28198f851db07247a39 1 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 29f2bffc953ba28198f851db07247a39 1 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=29f2bffc953ba28198f851db07247a39 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.z6K 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.z6K 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.z6K 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dfa5daf5a40e55b6470b77f81d15c175daf427ba4ef408b4 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.rVC 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dfa5daf5a40e55b6470b77f81d15c175daf427ba4ef408b4 2 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
dfa5daf5a40e55b6470b77f81d15c175daf427ba4ef408b4 2 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dfa5daf5a40e55b6470b77f81d15c175daf427ba4ef408b4 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.rVC 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.rVC 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.rVC 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cb2c460024a789314cb2c7eadc84599dc9b039b25d8156c1 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.UzL 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cb2c460024a789314cb2c7eadc84599dc9b039b25d8156c1 2 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cb2c460024a789314cb2c7eadc84599dc9b039b25d8156c1 2 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cb2c460024a789314cb2c7eadc84599dc9b039b25d8156c1 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:04.256 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.UzL 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.UzL 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.UzL 00:11:04.257 14:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=29e5d45b55855c4941d3dab6f032ef31 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.rJH 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 29e5d45b55855c4941d3dab6f032ef31 1 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 29e5d45b55855c4941d3dab6f032ef31 1 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=29e5d45b55855c4941d3dab6f032ef31 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:04.257 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.rJH 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.rJH 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.rJH 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d46c4a8b06a5383430e4a55d5c52a2ed3e1c33eab957399842979cc350ee5f5a 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:04.516 
14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.O8J 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d46c4a8b06a5383430e4a55d5c52a2ed3e1c33eab957399842979cc350ee5f5a 3 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d46c4a8b06a5383430e4a55d5c52a2ed3e1c33eab957399842979cc350ee5f5a 3 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d46c4a8b06a5383430e4a55d5c52a2ed3e1c33eab957399842979cc350ee5f5a 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.O8J 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.O8J 00:11:04.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.O8J 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 80831 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 80831 ']' 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.516 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
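Annotation: the gen_dhchap_key trace above reduces to a small recipe — read N random bytes as hex with xxd, store them in a 0600 temp file, and wrap the hex string in the DHHC-1 secret format that the later nvme connect calls consume. Below is a minimal stand-alone sketch of that flow; the function name is hypothetical, and the little-endian CRC32 trailer inside the base64 blob is an assumption inferred from the generated secrets, not something visible in the trace itself.

gen_dhchap_key_sketch() {
    local digest=$1 len=$2    # digest id: 0=null, 1=sha256, 2=sha384, 3=sha512 (as in the digests map above)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters of random key material
    file=$(mktemp -t spdk.key-sketch.XXX)
    python3 -c 'import base64, sys, zlib
secret = sys.argv[1].encode()                     # the hex string itself becomes the secret bytes
crc = zlib.crc32(secret).to_bytes(4, "little")    # assumed CRC32 trailer, inferred from the observed output
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(secret + crc).decode()}:")' "$key" "$digest" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

# e.g. a 48-character null-digest key comparable to keys[0] above:
# gen_dhchap_key_sketch 0 48
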
00:11:04.775 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.775 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:04.775 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 80856 /var/tmp/host.sock 00:11:04.775 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 80856 ']' 00:11:04.775 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:04.775 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.775 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:04.775 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.775 14:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.034 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.034 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:05.034 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:05.034 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.034 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.034 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.034 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:05.034 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tMP 00:11:05.034 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.034 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.034 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.034 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.tMP 00:11:05.034 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.tMP 00:11:05.293 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.N14 ]] 00:11:05.293 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.N14 00:11:05.293 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.293 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.293 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.293 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc 
keyring_file_add_key ckey0 /tmp/spdk.key-sha512.N14 00:11:05.293 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.N14 00:11:05.551 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:05.551 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.z6K 00:11:05.551 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.551 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.551 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.551 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.z6K 00:11:05.552 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.z6K 00:11:05.810 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.rVC ]] 00:11:05.810 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rVC 00:11:05.810 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.810 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.810 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.810 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rVC 00:11:05.810 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rVC 00:11:06.068 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:06.068 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.UzL 00:11:06.068 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.068 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.068 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.068 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.UzL 00:11:06.068 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.UzL 00:11:06.326 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.rJH ]] 00:11:06.326 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rJH 00:11:06.326 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.326 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.326 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.326 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rJH 00:11:06.326 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rJH 00:11:06.585 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:06.585 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.O8J 00:11:06.585 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.585 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.585 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.585 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.O8J 00:11:06.585 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.O8J 00:11:06.845 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:06.845 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:06.845 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:06.845 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:06.845 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:06.845 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:07.103 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:07.103 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:07.103 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:07.103 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:07.103 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:07.103 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.103 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.103 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.103 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.103 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.103 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.103 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.103 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.362 00:11:07.620 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:07.620 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.620 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:07.879 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.879 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.879 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.879 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.879 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.879 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:07.879 { 00:11:07.879 "cntlid": 1, 00:11:07.879 "qid": 0, 00:11:07.879 "state": "enabled", 00:11:07.879 "thread": "nvmf_tgt_poll_group_000", 00:11:07.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:07.879 "listen_address": { 00:11:07.879 "trtype": "TCP", 00:11:07.879 "adrfam": "IPv4", 00:11:07.879 "traddr": "10.0.0.3", 00:11:07.879 "trsvcid": "4420" 00:11:07.879 }, 00:11:07.879 "peer_address": { 00:11:07.879 "trtype": "TCP", 00:11:07.879 "adrfam": "IPv4", 00:11:07.879 "traddr": "10.0.0.1", 00:11:07.879 "trsvcid": "46206" 00:11:07.879 }, 00:11:07.879 "auth": { 00:11:07.879 "state": "completed", 00:11:07.879 "digest": "sha256", 00:11:07.879 "dhgroup": "null" 00:11:07.879 } 00:11:07.879 } 00:11:07.879 ]' 00:11:07.879 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.879 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.879 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:07.879 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:07.879 14:27:00 
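Annotation: condensed from the trace above, one authentication round for key index 0 looks roughly like the following. rpc.py talks to the nvmf target on its default socket /var/tmp/spdk.sock (what rpc_cmd waits on here) and to the host application on /var/tmp/host.sock; the NQN and address values are the ones used in this run.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181
subnqn=nqn.2024-03.io.spdk:cnode0

# Register the secret files as named keys on both the target and the host app.
$rpc keyring_file_add_key key0 /tmp/spdk.key-null.tMP
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.N14
$rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.tMP
$rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.N14

# Restrict the host-side initiator to one digest/dhgroup combination per pass.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# Allow the host on the subsystem with that key pair, then attach a controller using it.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
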
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:07.879 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.879 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.879 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.138 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:11:08.138 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.408 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.408 00:11:13.408 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:13.408 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.408 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:13.408 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.408 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.408 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.408 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.408 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.408 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:13.408 { 00:11:13.408 "cntlid": 3, 00:11:13.408 "qid": 0, 00:11:13.408 "state": "enabled", 00:11:13.408 "thread": "nvmf_tgt_poll_group_000", 00:11:13.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:13.408 "listen_address": { 00:11:13.408 "trtype": "TCP", 00:11:13.408 "adrfam": "IPv4", 00:11:13.408 "traddr": "10.0.0.3", 00:11:13.408 "trsvcid": "4420" 00:11:13.408 }, 00:11:13.408 "peer_address": { 00:11:13.408 "trtype": "TCP", 00:11:13.408 "adrfam": "IPv4", 00:11:13.408 "traddr": "10.0.0.1", 00:11:13.408 "trsvcid": "44332" 00:11:13.408 }, 00:11:13.408 "auth": { 00:11:13.408 "state": "completed", 00:11:13.408 "digest": "sha256", 00:11:13.408 "dhgroup": "null" 00:11:13.408 } 00:11:13.408 } 00:11:13.408 ]' 00:11:13.408 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:13.667 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:13.667 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:11:13.667 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:13.667 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.667 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.667 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.667 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.926 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:11:13.926 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:11:14.914 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.914 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:14.914 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.914 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.914 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.914 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.914 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:14.914 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:14.914 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:14.914 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.914 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:14.914 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:14.914 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:14.914 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.914 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.914 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.914 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.173 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.173 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.173 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.173 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.432 00:11:15.432 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.432 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.432 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.692 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.692 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.692 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.692 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.692 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.692 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:15.692 { 00:11:15.692 "cntlid": 5, 00:11:15.692 "qid": 0, 00:11:15.692 "state": "enabled", 00:11:15.692 "thread": "nvmf_tgt_poll_group_000", 00:11:15.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:15.692 "listen_address": { 00:11:15.692 "trtype": "TCP", 00:11:15.692 "adrfam": "IPv4", 00:11:15.692 "traddr": "10.0.0.3", 00:11:15.692 "trsvcid": "4420" 00:11:15.692 }, 00:11:15.692 "peer_address": { 00:11:15.692 "trtype": "TCP", 00:11:15.692 "adrfam": "IPv4", 00:11:15.692 "traddr": "10.0.0.1", 00:11:15.692 "trsvcid": "44366" 00:11:15.692 }, 00:11:15.692 "auth": { 00:11:15.692 "state": "completed", 00:11:15.692 "digest": "sha256", 00:11:15.692 "dhgroup": "null" 00:11:15.692 } 00:11:15.692 } 00:11:15.692 ]' 00:11:15.692 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.692 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:15.692 14:27:07 
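Annotation: the same secrets are also exercised through the kernel initiator — nvme_connect in the trace passes the DHHC-1 strings straight to nvme-cli, roughly as below (secrets abbreviated here; the full values are the base64 forms of keys[i]/ckeys[i] generated earlier).

hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181
subnqn=nqn.2024-03.io.spdk:cnode0

nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 \
    --dhchap-secret 'DHHC-1:00:MGY0...7L5REg==:' \
    --dhchap-ctrl-secret 'DHHC-1:03:NWZj...YYTap7s=:'

# ... the test then tears the session down again:
nvme disconnect -n "$subnqn"
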
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.692 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:15.692 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:15.692 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.692 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.692 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.951 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:11:15.951 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:11:16.888 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.888 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:16.888 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.888 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.888 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.888 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.888 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:16.888 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:16.888 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:16.888 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.888 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:16.888 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:16.888 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:16.888 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.888 
14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:11:16.888 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.888 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.888 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.888 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:16.888 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:16.888 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:17.147 00:11:17.405 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.405 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.665 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.665 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.665 { 00:11:17.665 "cntlid": 7, 00:11:17.665 "qid": 0, 00:11:17.665 "state": "enabled", 00:11:17.665 "thread": "nvmf_tgt_poll_group_000", 00:11:17.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:17.665 "listen_address": { 00:11:17.665 "trtype": "TCP", 00:11:17.665 "adrfam": "IPv4", 00:11:17.665 "traddr": "10.0.0.3", 00:11:17.665 "trsvcid": "4420" 00:11:17.665 }, 00:11:17.665 "peer_address": { 00:11:17.665 "trtype": "TCP", 00:11:17.665 "adrfam": "IPv4", 00:11:17.665 "traddr": "10.0.0.1", 00:11:17.665 "trsvcid": "44398" 00:11:17.665 }, 00:11:17.665 "auth": { 00:11:17.665 "state": "completed", 00:11:17.665 "digest": "sha256", 00:11:17.665 "dhgroup": "null" 00:11:17.665 } 00:11:17.665 } 00:11:17.665 ]' 00:11:17.665 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:17.665 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.665 14:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:17.665 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:17.665 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.665 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.665 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.665 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.924 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:11:17.924 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:11:18.493 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.493 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:18.493 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.493 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.493 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.493 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:18.493 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.493 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:18.493 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:19.061 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:19.061 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:19.061 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:19.061 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:19.061 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:19.061 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:11:19.061 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.061 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.061 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.061 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.061 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.061 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.061 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.320 00:11:19.320 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:19.320 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:19.320 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.579 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.579 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.579 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.579 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.579 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.579 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:19.579 { 00:11:19.579 "cntlid": 9, 00:11:19.579 "qid": 0, 00:11:19.579 "state": "enabled", 00:11:19.579 "thread": "nvmf_tgt_poll_group_000", 00:11:19.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:19.579 "listen_address": { 00:11:19.579 "trtype": "TCP", 00:11:19.579 "adrfam": "IPv4", 00:11:19.579 "traddr": "10.0.0.3", 00:11:19.579 "trsvcid": "4420" 00:11:19.579 }, 00:11:19.579 "peer_address": { 00:11:19.579 "trtype": "TCP", 00:11:19.579 "adrfam": "IPv4", 00:11:19.579 "traddr": "10.0.0.1", 00:11:19.579 "trsvcid": "44430" 00:11:19.579 }, 00:11:19.579 "auth": { 00:11:19.579 "state": "completed", 00:11:19.579 "digest": "sha256", 00:11:19.579 "dhgroup": "ffdhe2048" 00:11:19.579 } 00:11:19.579 } 00:11:19.579 ]' 00:11:19.579 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:19.579 14:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:19.579 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.579 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:19.579 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.579 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.579 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.579 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.838 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:11:19.838 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:11:20.775 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.775 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:20.775 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.775 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.775 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.775 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.775 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:20.775 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:21.034 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:21.034 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.034 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:21.034 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:21.034 
14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:21.034 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.034 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.034 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.034 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.034 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.034 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.034 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.034 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.293 00:11:21.293 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.293 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.293 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.552 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.552 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.552 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.552 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.552 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.552 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.552 { 00:11:21.552 "cntlid": 11, 00:11:21.552 "qid": 0, 00:11:21.552 "state": "enabled", 00:11:21.552 "thread": "nvmf_tgt_poll_group_000", 00:11:21.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:21.552 "listen_address": { 00:11:21.552 "trtype": "TCP", 00:11:21.552 "adrfam": "IPv4", 00:11:21.552 "traddr": "10.0.0.3", 00:11:21.552 "trsvcid": "4420" 00:11:21.552 }, 00:11:21.552 "peer_address": { 00:11:21.552 "trtype": "TCP", 00:11:21.552 "adrfam": "IPv4", 00:11:21.552 "traddr": "10.0.0.1", 00:11:21.552 "trsvcid": "44462" 00:11:21.552 }, 00:11:21.553 "auth": { 00:11:21.553 "state": "completed", 00:11:21.553 "digest": "sha256", 00:11:21.553 "dhgroup": 
"ffdhe2048" 00:11:21.553 } 00:11:21.553 } 00:11:21.553 ]' 00:11:21.553 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.553 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:21.553 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.553 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:21.553 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.553 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.553 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.553 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.121 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:11:22.121 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:11:22.689 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.689 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:22.689 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.689 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.689 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.689 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.689 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:22.689 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:22.948 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:22.948 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.948 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:22.948 
14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:22.948 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:22.948 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.948 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.948 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.948 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.948 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.948 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.948 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.948 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.205 00:11:23.205 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.205 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.205 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.464 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.464 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.464 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.464 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.464 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.464 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.464 { 00:11:23.464 "cntlid": 13, 00:11:23.464 "qid": 0, 00:11:23.464 "state": "enabled", 00:11:23.464 "thread": "nvmf_tgt_poll_group_000", 00:11:23.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:23.464 "listen_address": { 00:11:23.464 "trtype": "TCP", 00:11:23.464 "adrfam": "IPv4", 00:11:23.464 "traddr": "10.0.0.3", 00:11:23.464 "trsvcid": "4420" 00:11:23.464 }, 00:11:23.464 "peer_address": { 00:11:23.464 "trtype": "TCP", 00:11:23.464 "adrfam": "IPv4", 00:11:23.464 "traddr": "10.0.0.1", 00:11:23.464 "trsvcid": "52466" 00:11:23.464 }, 
00:11:23.464 "auth": { 00:11:23.464 "state": "completed", 00:11:23.464 "digest": "sha256", 00:11:23.464 "dhgroup": "ffdhe2048" 00:11:23.464 } 00:11:23.464 } 00:11:23.464 ]' 00:11:23.464 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:23.723 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:23.723 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.723 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:23.723 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.723 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.723 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.723 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.981 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:11:23.981 14:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:11:24.550 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.550 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:24.550 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.550 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.550 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.550 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.550 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:24.550 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:24.808 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:24.808 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
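Each connect_authenticate pass in this trace repeats the same host/target RPC sequence, only with a different digest, DH group and key index. The lines below are a condensed, hand-written sketch of one such pass rather than a verbatim copy of the captured commands; they assume the keyring entries key0..key3 and ckey0..ckey3 were registered with both SPDK instances earlier in the run, and that rpc.py invoked without -s reaches the nvmf target's default RPC socket (the target-side rpc_cmd calls in the trace do not show which socket they use).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181
  subnqn=nqn.2024-03.io.spdk:cnode0
  # Host side: limit the initiator to the digest/dhgroup combination under test.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # Target side: allow the host NQN on the subsystem with this iteration's key pair.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Host side: attaching a controller forces a DH-HMAC-CHAP handshake over the TCP transport.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Confirm the controller came up, then detach before the next digest/dhgroup/key combination.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0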
00:11:24.808 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:24.808 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:24.808 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:24.808 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.808 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:11:24.808 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.808 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.809 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.809 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:24.809 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:24.809 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:25.068 00:11:25.068 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.068 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.068 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.355 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.355 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.355 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.355 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.355 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.355 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.355 { 00:11:25.355 "cntlid": 15, 00:11:25.355 "qid": 0, 00:11:25.355 "state": "enabled", 00:11:25.355 "thread": "nvmf_tgt_poll_group_000", 00:11:25.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:25.355 "listen_address": { 00:11:25.355 "trtype": "TCP", 00:11:25.355 "adrfam": "IPv4", 00:11:25.355 "traddr": "10.0.0.3", 00:11:25.355 "trsvcid": "4420" 00:11:25.355 }, 00:11:25.355 "peer_address": { 00:11:25.355 "trtype": "TCP", 00:11:25.355 "adrfam": "IPv4", 00:11:25.355 "traddr": "10.0.0.1", 00:11:25.355 "trsvcid": "52486" 
00:11:25.355 }, 00:11:25.355 "auth": { 00:11:25.355 "state": "completed", 00:11:25.355 "digest": "sha256", 00:11:25.355 "dhgroup": "ffdhe2048" 00:11:25.355 } 00:11:25.355 } 00:11:25.355 ]' 00:11:25.355 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.355 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:25.355 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.355 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:25.355 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.614 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.614 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.614 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.873 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:11:25.873 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:11:26.441 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.441 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:26.441 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.441 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.441 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.441 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:26.441 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.441 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:26.441 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:26.701 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:26.701 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:11:26.701 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:26.701 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:26.701 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:26.701 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.701 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.701 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.701 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.701 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.701 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.701 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.701 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.960 00:11:26.960 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.960 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.960 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.219 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.219 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.219 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.219 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.219 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.219 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.219 { 00:11:27.219 "cntlid": 17, 00:11:27.219 "qid": 0, 00:11:27.219 "state": "enabled", 00:11:27.219 "thread": "nvmf_tgt_poll_group_000", 00:11:27.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:27.219 "listen_address": { 00:11:27.219 "trtype": "TCP", 00:11:27.219 "adrfam": "IPv4", 00:11:27.219 "traddr": "10.0.0.3", 00:11:27.219 "trsvcid": "4420" 00:11:27.219 }, 00:11:27.219 "peer_address": { 00:11:27.219 
"trtype": "TCP", 00:11:27.219 "adrfam": "IPv4", 00:11:27.219 "traddr": "10.0.0.1", 00:11:27.219 "trsvcid": "52510" 00:11:27.219 }, 00:11:27.219 "auth": { 00:11:27.219 "state": "completed", 00:11:27.219 "digest": "sha256", 00:11:27.219 "dhgroup": "ffdhe3072" 00:11:27.219 } 00:11:27.219 } 00:11:27.219 ]' 00:11:27.219 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.478 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:27.478 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.478 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:27.479 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.479 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.479 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.479 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.737 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:11:27.737 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:11:28.304 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.304 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:28.304 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.304 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.304 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.304 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.304 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:28.304 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:28.563 14:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:28.563 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.563 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:28.563 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:28.563 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:28.563 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.563 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.563 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.563 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.563 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.563 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.563 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.563 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.822 00:11:28.822 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.822 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.822 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.390 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.390 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.390 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.390 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.390 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.390 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.390 { 00:11:29.390 "cntlid": 19, 00:11:29.390 "qid": 0, 00:11:29.390 "state": "enabled", 00:11:29.390 "thread": "nvmf_tgt_poll_group_000", 00:11:29.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 
00:11:29.390 "listen_address": { 00:11:29.390 "trtype": "TCP", 00:11:29.390 "adrfam": "IPv4", 00:11:29.390 "traddr": "10.0.0.3", 00:11:29.390 "trsvcid": "4420" 00:11:29.390 }, 00:11:29.390 "peer_address": { 00:11:29.390 "trtype": "TCP", 00:11:29.390 "adrfam": "IPv4", 00:11:29.390 "traddr": "10.0.0.1", 00:11:29.390 "trsvcid": "52526" 00:11:29.390 }, 00:11:29.390 "auth": { 00:11:29.390 "state": "completed", 00:11:29.390 "digest": "sha256", 00:11:29.390 "dhgroup": "ffdhe3072" 00:11:29.390 } 00:11:29.390 } 00:11:29.390 ]' 00:11:29.390 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.390 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:29.390 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.390 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:29.390 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.390 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.390 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.390 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.649 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:11:29.649 14:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:11:30.217 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.217 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:30.217 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.217 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.217 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.217 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.217 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:30.217 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:30.476 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:30.476 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.476 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:30.476 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:30.476 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:30.476 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.476 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.476 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.476 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.476 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.476 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.476 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.476 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.044 00:11:31.044 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.044 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:31.044 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.303 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.303 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.303 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.303 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.303 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.303 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.303 { 00:11:31.303 "cntlid": 21, 00:11:31.303 "qid": 0, 00:11:31.303 "state": "enabled", 00:11:31.303 "thread": 
"nvmf_tgt_poll_group_000", 00:11:31.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:31.303 "listen_address": { 00:11:31.303 "trtype": "TCP", 00:11:31.303 "adrfam": "IPv4", 00:11:31.303 "traddr": "10.0.0.3", 00:11:31.303 "trsvcid": "4420" 00:11:31.303 }, 00:11:31.303 "peer_address": { 00:11:31.303 "trtype": "TCP", 00:11:31.303 "adrfam": "IPv4", 00:11:31.303 "traddr": "10.0.0.1", 00:11:31.303 "trsvcid": "52562" 00:11:31.303 }, 00:11:31.303 "auth": { 00:11:31.303 "state": "completed", 00:11:31.303 "digest": "sha256", 00:11:31.303 "dhgroup": "ffdhe3072" 00:11:31.303 } 00:11:31.303 } 00:11:31.303 ]' 00:11:31.303 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.303 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:31.303 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.303 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:31.303 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.303 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.303 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.303 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.871 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:11:31.871 14:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:11:32.439 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.439 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:32.439 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.439 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.439 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.439 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.439 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:32.439 14:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:32.699 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:32.699 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.699 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:32.699 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:32.699 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:32.699 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.699 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:11:32.699 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.699 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.699 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.699 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:32.699 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:32.699 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:32.958 00:11:32.958 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.958 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.958 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.216 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.216 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.216 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.216 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.216 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.216 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.216 { 00:11:33.216 "cntlid": 23, 00:11:33.216 "qid": 0, 00:11:33.216 "state": "enabled", 00:11:33.216 
"thread": "nvmf_tgt_poll_group_000", 00:11:33.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:33.216 "listen_address": { 00:11:33.216 "trtype": "TCP", 00:11:33.216 "adrfam": "IPv4", 00:11:33.216 "traddr": "10.0.0.3", 00:11:33.216 "trsvcid": "4420" 00:11:33.216 }, 00:11:33.216 "peer_address": { 00:11:33.216 "trtype": "TCP", 00:11:33.216 "adrfam": "IPv4", 00:11:33.216 "traddr": "10.0.0.1", 00:11:33.216 "trsvcid": "40100" 00:11:33.216 }, 00:11:33.216 "auth": { 00:11:33.216 "state": "completed", 00:11:33.216 "digest": "sha256", 00:11:33.216 "dhgroup": "ffdhe3072" 00:11:33.216 } 00:11:33.216 } 00:11:33.216 ]' 00:11:33.216 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.216 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:33.216 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.474 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:33.474 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.474 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.474 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.474 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.733 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:11:33.734 14:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:11:34.301 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.301 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:34.301 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.301 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.559 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.559 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:34.559 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.559 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:34.559 
14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:34.818 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:34.818 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.818 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:34.818 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:34.818 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:34.818 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.818 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.818 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.818 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.818 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.818 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.818 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.818 14:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.077 00:11:35.077 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.077 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.077 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.382 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.382 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.382 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.382 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.382 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.382 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:35.382 { 00:11:35.382 "cntlid": 25, 00:11:35.382 "qid": 0, 00:11:35.382 "state": "enabled", 00:11:35.382 "thread": "nvmf_tgt_poll_group_000", 00:11:35.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:35.382 "listen_address": { 00:11:35.382 "trtype": "TCP", 00:11:35.382 "adrfam": "IPv4", 00:11:35.382 "traddr": "10.0.0.3", 00:11:35.382 "trsvcid": "4420" 00:11:35.382 }, 00:11:35.382 "peer_address": { 00:11:35.382 "trtype": "TCP", 00:11:35.382 "adrfam": "IPv4", 00:11:35.382 "traddr": "10.0.0.1", 00:11:35.382 "trsvcid": "40130" 00:11:35.382 }, 00:11:35.382 "auth": { 00:11:35.382 "state": "completed", 00:11:35.382 "digest": "sha256", 00:11:35.382 "dhgroup": "ffdhe4096" 00:11:35.382 } 00:11:35.382 } 00:11:35.382 ]' 00:11:35.382 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.652 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:35.652 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.652 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:35.652 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.652 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.652 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.652 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.911 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:11:35.911 14:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:11:36.478 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.737 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:36.737 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.737 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.737 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.737 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:11:36.737 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:36.737 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:36.996 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:36.996 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:36.996 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:36.997 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:36.997 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:36.997 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.997 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.997 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.997 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.997 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.997 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.997 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.997 14:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.255 00:11:37.255 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.255 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.255 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.514 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.514 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.514 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.514 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.514 14:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.773 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.773 { 00:11:37.773 "cntlid": 27, 00:11:37.773 "qid": 0, 00:11:37.773 "state": "enabled", 00:11:37.773 "thread": "nvmf_tgt_poll_group_000", 00:11:37.773 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:37.773 "listen_address": { 00:11:37.773 "trtype": "TCP", 00:11:37.773 "adrfam": "IPv4", 00:11:37.773 "traddr": "10.0.0.3", 00:11:37.773 "trsvcid": "4420" 00:11:37.773 }, 00:11:37.773 "peer_address": { 00:11:37.773 "trtype": "TCP", 00:11:37.773 "adrfam": "IPv4", 00:11:37.773 "traddr": "10.0.0.1", 00:11:37.773 "trsvcid": "40154" 00:11:37.773 }, 00:11:37.773 "auth": { 00:11:37.773 "state": "completed", 00:11:37.773 "digest": "sha256", 00:11:37.773 "dhgroup": "ffdhe4096" 00:11:37.773 } 00:11:37.773 } 00:11:37.773 ]' 00:11:37.773 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.773 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:37.773 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.773 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:37.773 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.773 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.773 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.773 14:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.032 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:11:38.032 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:11:38.600 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.600 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:38.600 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.600 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.600 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
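
The trace above is one full pass of the sha256/ffdhe4096 case for a given key index: configure the host's DH-HMAC-CHAP options, authorize the host NQN on the subsystem with the key pair, attach a controller with the same keys, check the resulting qpair, then detach, exercise the kernel path with nvme connect/disconnect, and remove the host again. A condensed sketch of that RPC sequence, using only commands visible in this log (rpc.py path shortened; key1/ckey1 name keys the script registered earlier, outside this excerpt):

    # host side: offer only sha256 + ffdhe4096 during DH-HMAC-CHAP negotiation
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181

    # target side: allow this host on cnode0 with key1 (ckey1 for bidirectional auth)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # host side: attach a controller, authenticating with the same key pair
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # ... verify the qpair (the jq checks shown in the trace), then tear down
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
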
00:11:38.600 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:38.600 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:38.600 14:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:38.859 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:38.859 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:38.859 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:38.859 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:38.859 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:38.859 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.859 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.859 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.859 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.859 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.859 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.859 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.859 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.426 00:11:39.426 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.426 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:39.427 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.686 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.686 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.686 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.686 14:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.686 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.686 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:39.686 { 00:11:39.686 "cntlid": 29, 00:11:39.686 "qid": 0, 00:11:39.686 "state": "enabled", 00:11:39.686 "thread": "nvmf_tgt_poll_group_000", 00:11:39.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:39.686 "listen_address": { 00:11:39.686 "trtype": "TCP", 00:11:39.686 "adrfam": "IPv4", 00:11:39.686 "traddr": "10.0.0.3", 00:11:39.686 "trsvcid": "4420" 00:11:39.686 }, 00:11:39.686 "peer_address": { 00:11:39.686 "trtype": "TCP", 00:11:39.686 "adrfam": "IPv4", 00:11:39.686 "traddr": "10.0.0.1", 00:11:39.686 "trsvcid": "40182" 00:11:39.686 }, 00:11:39.686 "auth": { 00:11:39.686 "state": "completed", 00:11:39.686 "digest": "sha256", 00:11:39.686 "dhgroup": "ffdhe4096" 00:11:39.686 } 00:11:39.686 } 00:11:39.686 ]' 00:11:39.686 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:39.686 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:39.686 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:39.686 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:39.686 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:39.686 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.686 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.686 14:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.945 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:11:39.945 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:11:40.882 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.882 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:40.882 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.882 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
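
After each attach, the pass only counts as good if the target reports the parameters the host was restricted to; that is what the qpair dumps above are checked against. A minimal standalone version of that verification, assuming the same subsystem NQN as in this run:

    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # negotiated digest, dhgroup and auth state must match what was configured
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
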
00:11:40.882 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.882 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:40.882 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:40.882 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:40.882 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:40.882 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:40.882 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:40.882 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:40.882 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:40.882 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.882 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:11:40.882 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.882 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.882 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.882 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:40.882 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:40.882 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:41.450 00:11:41.450 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:41.450 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:41.450 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.709 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.709 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.709 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
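
One detail in the key3 pass above: nvmf_subsystem_add_host and the subsequent attach are issued with --dhchap-key key3 but without --dhchap-ctrlr-key, so the controller is not asked to authenticate itself back to the host. That follows from the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line in the trace: bash's ${var:+word} expansion produces the extra flag only when a controller key exists for that index, and the flag's absence in the key3 commands shows ckeys[3] is empty. A small illustration of that expansion (array contents hypothetical):

    ckeys=(ck0 ck1 ck2 "")          # no controller key registered for index 3
    for i in 0 3; do
        ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
        echo "key$i -> ${ckey[*]:-<no ctrlr key flag>}"
    done
    # prints:
    # key0 -> --dhchap-ctrlr-key ckey0
    # key3 -> <no ctrlr key flag>
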
00:11:41.709 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.709 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.709 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.709 { 00:11:41.709 "cntlid": 31, 00:11:41.709 "qid": 0, 00:11:41.709 "state": "enabled", 00:11:41.709 "thread": "nvmf_tgt_poll_group_000", 00:11:41.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:41.709 "listen_address": { 00:11:41.709 "trtype": "TCP", 00:11:41.709 "adrfam": "IPv4", 00:11:41.709 "traddr": "10.0.0.3", 00:11:41.709 "trsvcid": "4420" 00:11:41.709 }, 00:11:41.709 "peer_address": { 00:11:41.709 "trtype": "TCP", 00:11:41.709 "adrfam": "IPv4", 00:11:41.709 "traddr": "10.0.0.1", 00:11:41.709 "trsvcid": "40206" 00:11:41.709 }, 00:11:41.709 "auth": { 00:11:41.709 "state": "completed", 00:11:41.709 "digest": "sha256", 00:11:41.709 "dhgroup": "ffdhe4096" 00:11:41.709 } 00:11:41.709 } 00:11:41.709 ]' 00:11:41.709 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:41.709 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:41.709 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.709 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:41.709 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.709 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.709 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.709 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.277 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:11:42.277 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:11:42.845 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.845 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:42.845 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.845 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.845 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:42.845 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:42.845 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:42.845 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:42.845 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:43.103 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:43.103 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.103 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:43.103 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:43.103 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:43.103 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.103 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.103 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.103 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.103 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.103 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.103 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.103 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.669 00:11:43.669 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.669 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.669 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:43.928 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.928 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.928 
14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.928 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.928 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.928 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.928 { 00:11:43.928 "cntlid": 33, 00:11:43.928 "qid": 0, 00:11:43.928 "state": "enabled", 00:11:43.928 "thread": "nvmf_tgt_poll_group_000", 00:11:43.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:43.928 "listen_address": { 00:11:43.928 "trtype": "TCP", 00:11:43.928 "adrfam": "IPv4", 00:11:43.928 "traddr": "10.0.0.3", 00:11:43.928 "trsvcid": "4420" 00:11:43.928 }, 00:11:43.928 "peer_address": { 00:11:43.928 "trtype": "TCP", 00:11:43.928 "adrfam": "IPv4", 00:11:43.928 "traddr": "10.0.0.1", 00:11:43.928 "trsvcid": "52240" 00:11:43.928 }, 00:11:43.928 "auth": { 00:11:43.928 "state": "completed", 00:11:43.928 "digest": "sha256", 00:11:43.928 "dhgroup": "ffdhe6144" 00:11:43.928 } 00:11:43.928 } 00:11:43.928 ]' 00:11:43.928 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:43.928 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:43.928 14:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:43.928 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:43.928 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.928 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.928 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.928 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.187 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:11:44.187 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:11:45.122 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.122 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:45.122 14:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.122 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.122 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.122 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.122 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:45.122 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:45.122 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:45.122 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.122 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:45.122 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:45.122 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:45.122 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.122 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.122 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.122 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.123 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.123 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.123 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.123 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.702 00:11:45.703 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.703 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.703 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.974 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:11:45.974 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.974 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.974 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.974 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.974 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:45.974 { 00:11:45.974 "cntlid": 35, 00:11:45.974 "qid": 0, 00:11:45.974 "state": "enabled", 00:11:45.974 "thread": "nvmf_tgt_poll_group_000", 00:11:45.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:45.974 "listen_address": { 00:11:45.974 "trtype": "TCP", 00:11:45.974 "adrfam": "IPv4", 00:11:45.974 "traddr": "10.0.0.3", 00:11:45.974 "trsvcid": "4420" 00:11:45.974 }, 00:11:45.974 "peer_address": { 00:11:45.974 "trtype": "TCP", 00:11:45.974 "adrfam": "IPv4", 00:11:45.974 "traddr": "10.0.0.1", 00:11:45.974 "trsvcid": "52286" 00:11:45.974 }, 00:11:45.974 "auth": { 00:11:45.974 "state": "completed", 00:11:45.974 "digest": "sha256", 00:11:45.974 "dhgroup": "ffdhe6144" 00:11:45.974 } 00:11:45.974 } 00:11:45.974 ]' 00:11:45.974 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:45.974 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:45.974 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:45.974 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:45.974 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.232 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.232 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.232 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.491 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:11:46.491 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:11:47.058 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.058 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:47.058 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.058 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.058 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.058 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.058 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:47.058 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:47.317 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:47.317 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.317 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:47.317 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:47.317 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:47.318 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.318 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.318 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.318 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.318 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.318 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.318 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.318 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.886 00:11:47.886 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:47.886 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:47.886 14:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:11:48.144 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.144 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.144 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.144 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.144 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.144 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.144 { 00:11:48.144 "cntlid": 37, 00:11:48.144 "qid": 0, 00:11:48.144 "state": "enabled", 00:11:48.144 "thread": "nvmf_tgt_poll_group_000", 00:11:48.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:48.144 "listen_address": { 00:11:48.144 "trtype": "TCP", 00:11:48.144 "adrfam": "IPv4", 00:11:48.144 "traddr": "10.0.0.3", 00:11:48.144 "trsvcid": "4420" 00:11:48.144 }, 00:11:48.144 "peer_address": { 00:11:48.144 "trtype": "TCP", 00:11:48.144 "adrfam": "IPv4", 00:11:48.144 "traddr": "10.0.0.1", 00:11:48.144 "trsvcid": "52326" 00:11:48.144 }, 00:11:48.144 "auth": { 00:11:48.144 "state": "completed", 00:11:48.144 "digest": "sha256", 00:11:48.144 "dhgroup": "ffdhe6144" 00:11:48.144 } 00:11:48.144 } 00:11:48.144 ]' 00:11:48.144 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.144 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:48.145 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.145 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:48.145 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.145 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.145 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.145 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.403 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:11:48.404 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.341 14:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:49.341 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:49.909 00:11:49.909 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:49.909 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:49.909 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.168 14:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.168 14:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.168 14:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.168 14:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.168 14:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.168 14:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.168 { 00:11:50.168 "cntlid": 39, 00:11:50.168 "qid": 0, 00:11:50.168 "state": "enabled", 00:11:50.168 "thread": "nvmf_tgt_poll_group_000", 00:11:50.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:50.168 "listen_address": { 00:11:50.168 "trtype": "TCP", 00:11:50.168 "adrfam": "IPv4", 00:11:50.168 "traddr": "10.0.0.3", 00:11:50.168 "trsvcid": "4420" 00:11:50.168 }, 00:11:50.168 "peer_address": { 00:11:50.168 "trtype": "TCP", 00:11:50.168 "adrfam": "IPv4", 00:11:50.168 "traddr": "10.0.0.1", 00:11:50.168 "trsvcid": "52362" 00:11:50.168 }, 00:11:50.168 "auth": { 00:11:50.168 "state": "completed", 00:11:50.168 "digest": "sha256", 00:11:50.168 "dhgroup": "ffdhe6144" 00:11:50.168 } 00:11:50.168 } 00:11:50.168 ]' 00:11:50.168 14:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.168 14:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:50.168 14:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.168 14:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:50.168 14:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.427 14:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.427 14:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.427 14:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.686 14:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:11:50.686 14:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:11:51.254 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.254 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:51.254 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.254 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.254 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.254 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:51.254 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.254 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:51.254 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:51.513 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:51.513 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.513 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:51.513 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:51.513 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:51.513 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.513 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.513 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.513 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.513 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.513 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.513 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.513 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.450 00:11:52.450 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.450 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.450 14:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.710 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.710 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.710 { 00:11:52.710 "cntlid": 41, 00:11:52.710 "qid": 0, 00:11:52.710 "state": "enabled", 00:11:52.710 "thread": "nvmf_tgt_poll_group_000", 00:11:52.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:52.710 "listen_address": { 00:11:52.710 "trtype": "TCP", 00:11:52.710 "adrfam": "IPv4", 00:11:52.710 "traddr": "10.0.0.3", 00:11:52.710 "trsvcid": "4420" 00:11:52.710 }, 00:11:52.710 "peer_address": { 00:11:52.710 "trtype": "TCP", 00:11:52.710 "adrfam": "IPv4", 00:11:52.710 "traddr": "10.0.0.1", 00:11:52.710 "trsvcid": "52386" 00:11:52.710 }, 00:11:52.710 "auth": { 00:11:52.710 "state": "completed", 00:11:52.710 "digest": "sha256", 00:11:52.710 "dhgroup": "ffdhe8192" 00:11:52.710 } 00:11:52.710 } 00:11:52.710 ]' 00:11:52.710 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.710 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:52.710 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.710 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:52.710 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.710 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.710 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.710 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.969 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:11:52.969 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 
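
Each pass also exercises the kernel initiator: nvme-cli connects with the plaintext DHHC-1 secrets for the same key pair, and a successful connect here implies the in-band DH-HMAC-CHAP exchange completed before the controller was exposed. A trimmed version of the connect/disconnect pair from the trace (the DHHC-1 strings are the generated test secrets, abbreviated here):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181

    # --dhchap-secret is the host secret, --dhchap-ctrl-secret the controller secret
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 \
        --dhchap-secret 'DHHC-1:00:MGY0...' --dhchap-ctrl-secret 'DHHC-1:03:NWZj...'

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
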
00:11:53.904 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.904 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:53.904 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.904 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.904 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.904 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.904 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:53.904 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:53.904 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:11:53.904 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.904 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:53.904 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:53.904 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:53.904 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.904 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.904 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.904 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.904 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.904 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.904 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.904 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.841 00:11:54.841 14:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.841 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.841 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.100 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.100 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.100 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.100 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.100 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.100 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.100 { 00:11:55.100 "cntlid": 43, 00:11:55.100 "qid": 0, 00:11:55.100 "state": "enabled", 00:11:55.100 "thread": "nvmf_tgt_poll_group_000", 00:11:55.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:55.100 "listen_address": { 00:11:55.100 "trtype": "TCP", 00:11:55.100 "adrfam": "IPv4", 00:11:55.100 "traddr": "10.0.0.3", 00:11:55.100 "trsvcid": "4420" 00:11:55.100 }, 00:11:55.100 "peer_address": { 00:11:55.100 "trtype": "TCP", 00:11:55.100 "adrfam": "IPv4", 00:11:55.100 "traddr": "10.0.0.1", 00:11:55.100 "trsvcid": "45126" 00:11:55.100 }, 00:11:55.100 "auth": { 00:11:55.100 "state": "completed", 00:11:55.100 "digest": "sha256", 00:11:55.100 "dhgroup": "ffdhe8192" 00:11:55.100 } 00:11:55.100 } 00:11:55.100 ]' 00:11:55.100 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.100 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:55.100 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.100 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:55.100 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.100 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.100 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.100 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.359 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:11:55.359 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret 
DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.310 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.877 00:11:57.136 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.136 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.136 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.394 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.394 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.394 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.394 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.394 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.394 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.394 { 00:11:57.394 "cntlid": 45, 00:11:57.395 "qid": 0, 00:11:57.395 "state": "enabled", 00:11:57.395 "thread": "nvmf_tgt_poll_group_000", 00:11:57.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:57.395 "listen_address": { 00:11:57.395 "trtype": "TCP", 00:11:57.395 "adrfam": "IPv4", 00:11:57.395 "traddr": "10.0.0.3", 00:11:57.395 "trsvcid": "4420" 00:11:57.395 }, 00:11:57.395 "peer_address": { 00:11:57.395 "trtype": "TCP", 00:11:57.395 "adrfam": "IPv4", 00:11:57.395 "traddr": "10.0.0.1", 00:11:57.395 "trsvcid": "45154" 00:11:57.395 }, 00:11:57.395 "auth": { 00:11:57.395 "state": "completed", 00:11:57.395 "digest": "sha256", 00:11:57.395 "dhgroup": "ffdhe8192" 00:11:57.395 } 00:11:57.395 } 00:11:57.395 ]' 00:11:57.395 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.395 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:57.395 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.395 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:57.395 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.395 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.395 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.395 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.962 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:11:57.963 14:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:11:58.529 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.529 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:11:58.529 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.529 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.529 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.529 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.529 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:58.529 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:58.788 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:11:58.788 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.788 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:58.788 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:58.788 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:58.788 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.788 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:11:58.788 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.788 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.788 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.788 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:58.788 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:58.788 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:59.355 00:11:59.355 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.355 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.355 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.614 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.614 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.614 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.614 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.614 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.614 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.614 { 00:11:59.614 "cntlid": 47, 00:11:59.614 "qid": 0, 00:11:59.614 "state": "enabled", 00:11:59.614 "thread": "nvmf_tgt_poll_group_000", 00:11:59.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:11:59.614 "listen_address": { 00:11:59.614 "trtype": "TCP", 00:11:59.614 "adrfam": "IPv4", 00:11:59.614 "traddr": "10.0.0.3", 00:11:59.614 "trsvcid": "4420" 00:11:59.614 }, 00:11:59.614 "peer_address": { 00:11:59.614 "trtype": "TCP", 00:11:59.614 "adrfam": "IPv4", 00:11:59.614 "traddr": "10.0.0.1", 00:11:59.614 "trsvcid": "45188" 00:11:59.614 }, 00:11:59.614 "auth": { 00:11:59.614 "state": "completed", 00:11:59.614 "digest": "sha256", 00:11:59.614 "dhgroup": "ffdhe8192" 00:11:59.614 } 00:11:59.614 } 00:11:59.614 ]' 00:11:59.614 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.614 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:59.614 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.873 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:59.873 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.873 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.873 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.873 14:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.132 14:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:12:00.132 14:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:12:00.699 14:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.699 14:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:00.699 14:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.699 14:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.699 14:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.699 14:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:00.699 14:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:00.699 14:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.699 14:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:00.699 14:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:00.958 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:12:00.958 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.958 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:00.958 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:00.958 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:00.958 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.958 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.958 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.958 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.958 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.958 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.958 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:12:00.958 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.525 00:12:01.525 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.525 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.525 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.783 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.783 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.783 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.783 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.783 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.783 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.783 { 00:12:01.783 "cntlid": 49, 00:12:01.783 "qid": 0, 00:12:01.783 "state": "enabled", 00:12:01.783 "thread": "nvmf_tgt_poll_group_000", 00:12:01.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:01.783 "listen_address": { 00:12:01.783 "trtype": "TCP", 00:12:01.783 "adrfam": "IPv4", 00:12:01.783 "traddr": "10.0.0.3", 00:12:01.783 "trsvcid": "4420" 00:12:01.783 }, 00:12:01.783 "peer_address": { 00:12:01.783 "trtype": "TCP", 00:12:01.783 "adrfam": "IPv4", 00:12:01.783 "traddr": "10.0.0.1", 00:12:01.783 "trsvcid": "45224" 00:12:01.783 }, 00:12:01.783 "auth": { 00:12:01.783 "state": "completed", 00:12:01.783 "digest": "sha384", 00:12:01.783 "dhgroup": "null" 00:12:01.783 } 00:12:01.783 } 00:12:01.783 ]' 00:12:01.783 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:01.783 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:01.783 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.783 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:01.783 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.783 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.783 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.783 14:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.040 14:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:12:02.040 14:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:12:02.978 14:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.978 14:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:02.978 14:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.978 14:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.978 14:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.978 14:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.978 14:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:02.978 14:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:02.978 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:12:02.978 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.978 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:02.978 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:02.978 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:02.978 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.978 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.978 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.978 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.978 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.978 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.978 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.978 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.237 00:12:03.496 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.496 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:03.496 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.755 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.755 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.755 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.755 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.755 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.755 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.755 { 00:12:03.755 "cntlid": 51, 00:12:03.755 "qid": 0, 00:12:03.755 "state": "enabled", 00:12:03.755 "thread": "nvmf_tgt_poll_group_000", 00:12:03.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:03.755 "listen_address": { 00:12:03.755 "trtype": "TCP", 00:12:03.755 "adrfam": "IPv4", 00:12:03.755 "traddr": "10.0.0.3", 00:12:03.755 "trsvcid": "4420" 00:12:03.755 }, 00:12:03.755 "peer_address": { 00:12:03.755 "trtype": "TCP", 00:12:03.755 "adrfam": "IPv4", 00:12:03.755 "traddr": "10.0.0.1", 00:12:03.755 "trsvcid": "40568" 00:12:03.755 }, 00:12:03.755 "auth": { 00:12:03.755 "state": "completed", 00:12:03.755 "digest": "sha384", 00:12:03.755 "dhgroup": "null" 00:12:03.755 } 00:12:03.755 } 00:12:03.755 ]' 00:12:03.755 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.755 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:03.755 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.755 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:03.755 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.755 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.755 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.755 14:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:12:04.322 14:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:12:04.322 14:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:12:04.889 14:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.889 14:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:04.889 14:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.889 14:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.889 14:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.889 14:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.889 14:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:04.889 14:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:05.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:05.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:05.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:05.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:05.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:12:05.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.414 00:12:05.414 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.414 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.414 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.673 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.673 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.673 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.673 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.673 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.673 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.673 { 00:12:05.673 "cntlid": 53, 00:12:05.673 "qid": 0, 00:12:05.673 "state": "enabled", 00:12:05.673 "thread": "nvmf_tgt_poll_group_000", 00:12:05.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:05.673 "listen_address": { 00:12:05.673 "trtype": "TCP", 00:12:05.673 "adrfam": "IPv4", 00:12:05.673 "traddr": "10.0.0.3", 00:12:05.673 "trsvcid": "4420" 00:12:05.673 }, 00:12:05.673 "peer_address": { 00:12:05.673 "trtype": "TCP", 00:12:05.673 "adrfam": "IPv4", 00:12:05.673 "traddr": "10.0.0.1", 00:12:05.673 "trsvcid": "40598" 00:12:05.673 }, 00:12:05.673 "auth": { 00:12:05.673 "state": "completed", 00:12:05.673 "digest": "sha384", 00:12:05.673 "dhgroup": "null" 00:12:05.673 } 00:12:05.673 } 00:12:05.673 ]' 00:12:05.673 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.673 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:05.673 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.932 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:05.932 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.932 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.932 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.932 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.190 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:12:06.191 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:12:06.757 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.757 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:06.757 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.757 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.757 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.757 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.757 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:06.757 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:07.017 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:07.017 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.017 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:07.017 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:07.017 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:07.017 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.017 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:12:07.017 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.017 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.017 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.017 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key3 00:12:07.017 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.017 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.275 00:12:07.533 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:07.533 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.533 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.792 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.792 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.792 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.792 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.792 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.792 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.792 { 00:12:07.792 "cntlid": 55, 00:12:07.792 "qid": 0, 00:12:07.792 "state": "enabled", 00:12:07.792 "thread": "nvmf_tgt_poll_group_000", 00:12:07.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:07.792 "listen_address": { 00:12:07.792 "trtype": "TCP", 00:12:07.792 "adrfam": "IPv4", 00:12:07.792 "traddr": "10.0.0.3", 00:12:07.792 "trsvcid": "4420" 00:12:07.792 }, 00:12:07.792 "peer_address": { 00:12:07.792 "trtype": "TCP", 00:12:07.792 "adrfam": "IPv4", 00:12:07.792 "traddr": "10.0.0.1", 00:12:07.792 "trsvcid": "40618" 00:12:07.792 }, 00:12:07.792 "auth": { 00:12:07.792 "state": "completed", 00:12:07.792 "digest": "sha384", 00:12:07.792 "dhgroup": "null" 00:12:07.792 } 00:12:07.792 } 00:12:07.792 ]' 00:12:07.792 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.792 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:07.792 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.792 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:07.792 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:07.793 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.793 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.793 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.058 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:12:08.058 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:12:09.002 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.002 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:09.002 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.002 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.002 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.002 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:09.002 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.002 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:09.002 14:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:09.261 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:09.261 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.261 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:09.261 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:09.261 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:09.261 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.261 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.261 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.261 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.261 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.261 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.261 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.261 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.520 00:12:09.520 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.520 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:09.520 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.778 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.778 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.778 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.778 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.778 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.778 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:09.778 { 00:12:09.778 "cntlid": 57, 00:12:09.778 "qid": 0, 00:12:09.778 "state": "enabled", 00:12:09.778 "thread": "nvmf_tgt_poll_group_000", 00:12:09.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:09.779 "listen_address": { 00:12:09.779 "trtype": "TCP", 00:12:09.779 "adrfam": "IPv4", 00:12:09.779 "traddr": "10.0.0.3", 00:12:09.779 "trsvcid": "4420" 00:12:09.779 }, 00:12:09.779 "peer_address": { 00:12:09.779 "trtype": "TCP", 00:12:09.779 "adrfam": "IPv4", 00:12:09.779 "traddr": "10.0.0.1", 00:12:09.779 "trsvcid": "40654" 00:12:09.779 }, 00:12:09.779 "auth": { 00:12:09.779 "state": "completed", 00:12:09.779 "digest": "sha384", 00:12:09.779 "dhgroup": "ffdhe2048" 00:12:09.779 } 00:12:09.779 } 00:12:09.779 ]' 00:12:09.779 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:09.779 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:09.779 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.037 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:10.037 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.037 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.037 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:10.037 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.296 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:12:10.296 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:12:10.863 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.864 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:10.864 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.864 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.864 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.864 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:10.864 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:10.864 14:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:11.122 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:11.122 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.122 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:11.122 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:11.122 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:11.122 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.122 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.122 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.122 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:11.122 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.122 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.122 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.122 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.381 00:12:11.381 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.381 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.381 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.948 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.948 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.948 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.948 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.948 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.948 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:11.948 { 00:12:11.948 "cntlid": 59, 00:12:11.948 "qid": 0, 00:12:11.948 "state": "enabled", 00:12:11.948 "thread": "nvmf_tgt_poll_group_000", 00:12:11.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:11.948 "listen_address": { 00:12:11.948 "trtype": "TCP", 00:12:11.948 "adrfam": "IPv4", 00:12:11.948 "traddr": "10.0.0.3", 00:12:11.948 "trsvcid": "4420" 00:12:11.948 }, 00:12:11.948 "peer_address": { 00:12:11.948 "trtype": "TCP", 00:12:11.948 "adrfam": "IPv4", 00:12:11.948 "traddr": "10.0.0.1", 00:12:11.948 "trsvcid": "40680" 00:12:11.948 }, 00:12:11.948 "auth": { 00:12:11.948 "state": "completed", 00:12:11.948 "digest": "sha384", 00:12:11.948 "dhgroup": "ffdhe2048" 00:12:11.948 } 00:12:11.948 } 00:12:11.948 ]' 00:12:11.948 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:11.948 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:11.948 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:11.948 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:11.948 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:11.948 14:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.948 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.948 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.207 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:12:12.207 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:12:12.774 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.774 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:12.774 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.774 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.774 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.774 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:12.774 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:12.774 14:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:13.341 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:13.341 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.341 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:13.341 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:13.341 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:13.341 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.341 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.341 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.341 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.341 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.341 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.341 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.341 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.599 00:12:13.599 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:13.599 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.599 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:13.857 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.857 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.857 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.857 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.858 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.858 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:13.858 { 00:12:13.858 "cntlid": 61, 00:12:13.858 "qid": 0, 00:12:13.858 "state": "enabled", 00:12:13.858 "thread": "nvmf_tgt_poll_group_000", 00:12:13.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:13.858 "listen_address": { 00:12:13.858 "trtype": "TCP", 00:12:13.858 "adrfam": "IPv4", 00:12:13.858 "traddr": "10.0.0.3", 00:12:13.858 "trsvcid": "4420" 00:12:13.858 }, 00:12:13.858 "peer_address": { 00:12:13.858 "trtype": "TCP", 00:12:13.858 "adrfam": "IPv4", 00:12:13.858 "traddr": "10.0.0.1", 00:12:13.858 "trsvcid": "42578" 00:12:13.858 }, 00:12:13.858 "auth": { 00:12:13.858 "state": "completed", 00:12:13.858 "digest": "sha384", 00:12:13.858 "dhgroup": "ffdhe2048" 00:12:13.858 } 00:12:13.858 } 00:12:13.858 ]' 00:12:13.858 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:13.858 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:13.858 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:13.858 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:13.858 14:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.116 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.116 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.116 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.375 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:12:14.375 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:12:14.942 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.942 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:14.942 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.942 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.942 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.942 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:14.942 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:14.942 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:15.200 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:15.200 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.200 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:15.200 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:15.200 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:15.201 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.201 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 
--dhchap-key key3 00:12:15.201 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.201 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.201 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.201 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:15.201 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:15.201 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:15.459 00:12:15.459 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:15.459 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:15.459 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.026 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.026 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.026 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.026 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.026 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.026 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.026 { 00:12:16.026 "cntlid": 63, 00:12:16.026 "qid": 0, 00:12:16.026 "state": "enabled", 00:12:16.026 "thread": "nvmf_tgt_poll_group_000", 00:12:16.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:16.026 "listen_address": { 00:12:16.026 "trtype": "TCP", 00:12:16.026 "adrfam": "IPv4", 00:12:16.026 "traddr": "10.0.0.3", 00:12:16.026 "trsvcid": "4420" 00:12:16.026 }, 00:12:16.026 "peer_address": { 00:12:16.026 "trtype": "TCP", 00:12:16.026 "adrfam": "IPv4", 00:12:16.026 "traddr": "10.0.0.1", 00:12:16.026 "trsvcid": "42606" 00:12:16.026 }, 00:12:16.026 "auth": { 00:12:16.026 "state": "completed", 00:12:16.026 "digest": "sha384", 00:12:16.026 "dhgroup": "ffdhe2048" 00:12:16.026 } 00:12:16.026 } 00:12:16.026 ]' 00:12:16.026 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.026 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:16.026 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.026 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 
00:12:16.026 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.026 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.026 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.026 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.285 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:12:16.285 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.221 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.788 00:12:17.788 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.788 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:17.788 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.048 14:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.048 14:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.048 14:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.048 14:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.048 14:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.048 14:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.048 { 00:12:18.048 "cntlid": 65, 00:12:18.048 "qid": 0, 00:12:18.048 "state": "enabled", 00:12:18.048 "thread": "nvmf_tgt_poll_group_000", 00:12:18.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:18.048 "listen_address": { 00:12:18.048 "trtype": "TCP", 00:12:18.048 "adrfam": "IPv4", 00:12:18.048 "traddr": "10.0.0.3", 00:12:18.048 "trsvcid": "4420" 00:12:18.048 }, 00:12:18.048 "peer_address": { 00:12:18.048 "trtype": "TCP", 00:12:18.048 "adrfam": "IPv4", 00:12:18.048 "traddr": "10.0.0.1", 00:12:18.048 "trsvcid": "42644" 00:12:18.048 }, 00:12:18.048 "auth": { 00:12:18.048 "state": "completed", 00:12:18.048 "digest": "sha384", 00:12:18.048 "dhgroup": "ffdhe3072" 00:12:18.048 } 00:12:18.048 } 00:12:18.048 ]' 00:12:18.048 14:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.048 14:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:18.048 14:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:18.048 14:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:18.048 14:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.048 14:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.048 14:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.048 14:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.307 14:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:12:18.307 14:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:12:19.262 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.262 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:19.262 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.262 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.262 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.262 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.262 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:19.262 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:19.520 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:19.520 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.520 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:19.520 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:19.520 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:19.520 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.520 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.520 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.520 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.520 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.520 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.520 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.520 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.778 00:12:19.778 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.778 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.778 14:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.036 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.036 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.036 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.036 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.036 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.036 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.036 { 00:12:20.036 "cntlid": 67, 00:12:20.036 "qid": 0, 00:12:20.036 "state": "enabled", 00:12:20.036 "thread": "nvmf_tgt_poll_group_000", 00:12:20.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:20.036 "listen_address": { 00:12:20.036 "trtype": "TCP", 00:12:20.036 "adrfam": "IPv4", 00:12:20.036 "traddr": "10.0.0.3", 00:12:20.036 "trsvcid": "4420" 00:12:20.036 }, 00:12:20.036 "peer_address": { 00:12:20.036 "trtype": "TCP", 00:12:20.036 "adrfam": "IPv4", 00:12:20.036 "traddr": "10.0.0.1", 00:12:20.036 "trsvcid": "42668" 00:12:20.036 }, 00:12:20.036 "auth": { 00:12:20.036 "state": "completed", 00:12:20.036 "digest": "sha384", 00:12:20.036 "dhgroup": "ffdhe3072" 00:12:20.036 } 00:12:20.036 } 00:12:20.036 ]' 00:12:20.036 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:12:20.036 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:20.036 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.294 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:20.294 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.294 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.294 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.294 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.553 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:12:20.553 14:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:12:21.120 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.120 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:21.120 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.120 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.120 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.120 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:21.120 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:21.120 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:21.379 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:21.379 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.379 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:21.379 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:21.379 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key2 00:12:21.379 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.379 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.379 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.379 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.379 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.380 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.380 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.380 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.947 00:12:21.947 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.947 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.947 14:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.206 14:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.206 14:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.206 14:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.206 14:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.206 14:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.206 14:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:22.206 { 00:12:22.206 "cntlid": 69, 00:12:22.206 "qid": 0, 00:12:22.206 "state": "enabled", 00:12:22.206 "thread": "nvmf_tgt_poll_group_000", 00:12:22.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:22.206 "listen_address": { 00:12:22.206 "trtype": "TCP", 00:12:22.206 "adrfam": "IPv4", 00:12:22.206 "traddr": "10.0.0.3", 00:12:22.206 "trsvcid": "4420" 00:12:22.206 }, 00:12:22.206 "peer_address": { 00:12:22.206 "trtype": "TCP", 00:12:22.206 "adrfam": "IPv4", 00:12:22.206 "traddr": "10.0.0.1", 00:12:22.206 "trsvcid": "42682" 00:12:22.206 }, 00:12:22.206 "auth": { 00:12:22.206 "state": "completed", 00:12:22.206 "digest": "sha384", 00:12:22.206 "dhgroup": "ffdhe3072" 00:12:22.206 } 00:12:22.206 } 00:12:22.206 ]' 
00:12:22.206 14:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:22.206 14:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:22.206 14:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:22.206 14:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:22.206 14:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:22.206 14:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.206 14:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.206 14:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.774 14:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:12:22.774 14:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:12:23.358 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.358 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:23.358 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.358 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.358 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.358 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.358 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:23.358 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:23.616 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:23.616 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.616 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:23.616 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:23.616 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:23.616 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.616 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:12:23.616 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.616 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.616 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.616 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:23.616 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:23.616 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:23.875 00:12:23.875 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.875 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.875 14:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.134 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.134 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.134 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.134 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.134 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.134 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.134 { 00:12:24.134 "cntlid": 71, 00:12:24.134 "qid": 0, 00:12:24.134 "state": "enabled", 00:12:24.134 "thread": "nvmf_tgt_poll_group_000", 00:12:24.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:24.134 "listen_address": { 00:12:24.134 "trtype": "TCP", 00:12:24.134 "adrfam": "IPv4", 00:12:24.134 "traddr": "10.0.0.3", 00:12:24.134 "trsvcid": "4420" 00:12:24.134 }, 00:12:24.134 "peer_address": { 00:12:24.134 "trtype": "TCP", 00:12:24.134 "adrfam": "IPv4", 00:12:24.134 "traddr": "10.0.0.1", 00:12:24.134 "trsvcid": "35262" 00:12:24.134 }, 00:12:24.134 "auth": { 00:12:24.134 "state": "completed", 00:12:24.134 "digest": "sha384", 00:12:24.134 "dhgroup": "ffdhe3072" 00:12:24.134 } 00:12:24.134 } 
00:12:24.134 ]' 00:12:24.134 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.393 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:24.393 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.393 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:24.393 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.393 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.393 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.393 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.652 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:12:24.652 14:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:12:25.219 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.220 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:25.220 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.220 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.220 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.220 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:25.220 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.220 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:25.220 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:25.787 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:25.787 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.787 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:25.787 14:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:25.787 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:25.787 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.787 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.787 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.787 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.787 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.787 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.787 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.787 14:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.046 00:12:26.046 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.046 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.046 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.305 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.305 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.305 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.305 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.305 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.305 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.305 { 00:12:26.305 "cntlid": 73, 00:12:26.305 "qid": 0, 00:12:26.305 "state": "enabled", 00:12:26.305 "thread": "nvmf_tgt_poll_group_000", 00:12:26.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:26.306 "listen_address": { 00:12:26.306 "trtype": "TCP", 00:12:26.306 "adrfam": "IPv4", 00:12:26.306 "traddr": "10.0.0.3", 00:12:26.306 "trsvcid": "4420" 00:12:26.306 }, 00:12:26.306 "peer_address": { 00:12:26.306 "trtype": "TCP", 00:12:26.306 "adrfam": "IPv4", 00:12:26.306 "traddr": "10.0.0.1", 00:12:26.306 "trsvcid": "35290" 00:12:26.306 }, 00:12:26.306 "auth": 
{ 00:12:26.306 "state": "completed", 00:12:26.306 "digest": "sha384", 00:12:26.306 "dhgroup": "ffdhe4096" 00:12:26.306 } 00:12:26.306 } 00:12:26.306 ]' 00:12:26.306 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.306 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:26.306 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.306 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:26.306 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.565 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.565 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.565 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.823 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:12:26.823 14:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:12:27.391 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.391 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:27.391 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.391 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.391 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.391 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.391 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:27.391 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:27.651 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:27.651 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.651 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:27.651 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:27.651 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:27.651 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.651 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.651 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.651 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.651 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.651 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.651 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.651 14:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.217 00:12:28.217 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.217 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.217 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.476 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.476 { 00:12:28.476 "cntlid": 75, 00:12:28.476 "qid": 0, 00:12:28.477 "state": "enabled", 00:12:28.477 "thread": "nvmf_tgt_poll_group_000", 00:12:28.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:28.477 "listen_address": { 00:12:28.477 "trtype": "TCP", 00:12:28.477 "adrfam": "IPv4", 00:12:28.477 "traddr": "10.0.0.3", 00:12:28.477 "trsvcid": "4420" 00:12:28.477 }, 
00:12:28.477 "peer_address": { 00:12:28.477 "trtype": "TCP", 00:12:28.477 "adrfam": "IPv4", 00:12:28.477 "traddr": "10.0.0.1", 00:12:28.477 "trsvcid": "35304" 00:12:28.477 }, 00:12:28.477 "auth": { 00:12:28.477 "state": "completed", 00:12:28.477 "digest": "sha384", 00:12:28.477 "dhgroup": "ffdhe4096" 00:12:28.477 } 00:12:28.477 } 00:12:28.477 ]' 00:12:28.477 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.477 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:28.477 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.477 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:28.477 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.477 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.477 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.477 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.045 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:12:29.045 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:12:29.613 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.613 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:29.613 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.613 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.613 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.613 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.613 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:29.613 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:29.873 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha384 ffdhe4096 2 00:12:29.873 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:29.873 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:29.873 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:29.873 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:29.873 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.873 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.873 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.873 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.873 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.873 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.873 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.873 14:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.131 00:12:30.131 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.132 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.132 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.700 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.700 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.700 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.700 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.700 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.700 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.700 { 00:12:30.700 "cntlid": 77, 00:12:30.700 "qid": 0, 00:12:30.700 "state": "enabled", 00:12:30.700 "thread": "nvmf_tgt_poll_group_000", 00:12:30.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:30.700 "listen_address": { 00:12:30.700 "trtype": "TCP", 
00:12:30.700 "adrfam": "IPv4", 00:12:30.700 "traddr": "10.0.0.3", 00:12:30.700 "trsvcid": "4420" 00:12:30.700 }, 00:12:30.700 "peer_address": { 00:12:30.700 "trtype": "TCP", 00:12:30.700 "adrfam": "IPv4", 00:12:30.700 "traddr": "10.0.0.1", 00:12:30.700 "trsvcid": "35342" 00:12:30.700 }, 00:12:30.700 "auth": { 00:12:30.700 "state": "completed", 00:12:30.700 "digest": "sha384", 00:12:30.700 "dhgroup": "ffdhe4096" 00:12:30.700 } 00:12:30.700 } 00:12:30.700 ]' 00:12:30.700 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.700 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:30.700 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.700 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:30.700 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.700 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.700 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.700 14:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.959 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:12:30.959 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:12:31.899 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.899 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:31.899 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.899 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.900 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.900 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:31.900 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:31.900 14:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:12:32.159 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:32.159 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.159 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:32.159 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:32.159 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:32.159 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.159 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:12:32.159 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.159 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.159 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.159 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:32.159 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:32.159 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:32.418 00:12:32.677 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.677 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.677 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.936 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.936 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.936 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.936 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.936 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.936 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.936 { 00:12:32.936 "cntlid": 79, 00:12:32.936 "qid": 0, 00:12:32.936 "state": "enabled", 00:12:32.936 "thread": "nvmf_tgt_poll_group_000", 00:12:32.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:32.936 "listen_address": { 00:12:32.936 
"trtype": "TCP", 00:12:32.936 "adrfam": "IPv4", 00:12:32.936 "traddr": "10.0.0.3", 00:12:32.936 "trsvcid": "4420" 00:12:32.936 }, 00:12:32.936 "peer_address": { 00:12:32.936 "trtype": "TCP", 00:12:32.936 "adrfam": "IPv4", 00:12:32.936 "traddr": "10.0.0.1", 00:12:32.936 "trsvcid": "45234" 00:12:32.936 }, 00:12:32.936 "auth": { 00:12:32.936 "state": "completed", 00:12:32.936 "digest": "sha384", 00:12:32.936 "dhgroup": "ffdhe4096" 00:12:32.936 } 00:12:32.936 } 00:12:32.936 ]' 00:12:32.936 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:32.936 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:32.936 14:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:32.936 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:32.936 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:32.936 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.936 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.936 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.502 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:12:33.502 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:12:34.070 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.070 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:34.070 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.070 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.070 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.070 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:34.070 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.070 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:34.070 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:34.329 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:34.329 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.329 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:34.329 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:34.329 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:34.329 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.329 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.329 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.329 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.329 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.329 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.329 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.329 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.897 00:12:34.897 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:34.897 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:34.897 14:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.156 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.156 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.156 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.156 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.156 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.156 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.156 { 00:12:35.156 "cntlid": 81, 00:12:35.156 "qid": 0, 00:12:35.156 "state": "enabled", 00:12:35.156 "thread": "nvmf_tgt_poll_group_000", 00:12:35.156 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:35.156 "listen_address": { 00:12:35.156 "trtype": "TCP", 00:12:35.156 "adrfam": "IPv4", 00:12:35.156 "traddr": "10.0.0.3", 00:12:35.156 "trsvcid": "4420" 00:12:35.156 }, 00:12:35.156 "peer_address": { 00:12:35.156 "trtype": "TCP", 00:12:35.156 "adrfam": "IPv4", 00:12:35.156 "traddr": "10.0.0.1", 00:12:35.156 "trsvcid": "45266" 00:12:35.156 }, 00:12:35.156 "auth": { 00:12:35.156 "state": "completed", 00:12:35.156 "digest": "sha384", 00:12:35.156 "dhgroup": "ffdhe6144" 00:12:35.156 } 00:12:35.156 } 00:12:35.156 ]' 00:12:35.156 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.415 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:35.415 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.415 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:35.415 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.415 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.415 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.416 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.674 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:12:35.674 14:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:12:36.242 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.242 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:36.242 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.242 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.242 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.242 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.242 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:12:36.242 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:36.839 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:36.839 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:36.839 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:36.839 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:36.839 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:36.839 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.839 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.839 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.839 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.839 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.839 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.839 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.839 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.098 00:12:37.098 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.098 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.098 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.358 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.358 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.358 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.358 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.358 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.358 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:12:37.358 { 00:12:37.358 "cntlid": 83, 00:12:37.358 "qid": 0, 00:12:37.358 "state": "enabled", 00:12:37.358 "thread": "nvmf_tgt_poll_group_000", 00:12:37.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:37.358 "listen_address": { 00:12:37.358 "trtype": "TCP", 00:12:37.358 "adrfam": "IPv4", 00:12:37.358 "traddr": "10.0.0.3", 00:12:37.358 "trsvcid": "4420" 00:12:37.358 }, 00:12:37.358 "peer_address": { 00:12:37.358 "trtype": "TCP", 00:12:37.358 "adrfam": "IPv4", 00:12:37.358 "traddr": "10.0.0.1", 00:12:37.358 "trsvcid": "45290" 00:12:37.358 }, 00:12:37.358 "auth": { 00:12:37.358 "state": "completed", 00:12:37.358 "digest": "sha384", 00:12:37.358 "dhgroup": "ffdhe6144" 00:12:37.358 } 00:12:37.358 } 00:12:37.358 ]' 00:12:37.358 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.617 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:37.617 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.617 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:37.617 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.617 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.617 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.617 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.875 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:12:37.875 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:12:38.443 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.443 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:38.443 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.443 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.443 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.443 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.443 14:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:38.443 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:38.702 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:38.702 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:38.702 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:38.702 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:38.702 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:38.702 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.702 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.702 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.702 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.702 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.702 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.702 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.702 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.271 00:12:39.271 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.271 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.271 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.530 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.530 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.530 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.530 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.530 14:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.530 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.530 { 00:12:39.530 "cntlid": 85, 00:12:39.530 "qid": 0, 00:12:39.530 "state": "enabled", 00:12:39.530 "thread": "nvmf_tgt_poll_group_000", 00:12:39.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:39.530 "listen_address": { 00:12:39.530 "trtype": "TCP", 00:12:39.530 "adrfam": "IPv4", 00:12:39.530 "traddr": "10.0.0.3", 00:12:39.530 "trsvcid": "4420" 00:12:39.530 }, 00:12:39.530 "peer_address": { 00:12:39.530 "trtype": "TCP", 00:12:39.530 "adrfam": "IPv4", 00:12:39.530 "traddr": "10.0.0.1", 00:12:39.530 "trsvcid": "45318" 00:12:39.530 }, 00:12:39.530 "auth": { 00:12:39.530 "state": "completed", 00:12:39.530 "digest": "sha384", 00:12:39.530 "dhgroup": "ffdhe6144" 00:12:39.530 } 00:12:39.530 } 00:12:39.530 ]' 00:12:39.530 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.530 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:39.530 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:39.790 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:39.790 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:39.790 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.790 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.790 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.049 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:12:40.049 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:12:40.616 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.616 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:40.616 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.616 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.616 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:40.616 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.616 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:40.616 14:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:40.875 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:40.875 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.875 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:40.875 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:40.875 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:40.875 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.875 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:12:40.875 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.875 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.134 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.134 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:41.134 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:41.134 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:41.393 00:12:41.652 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.652 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.652 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.912 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.912 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.912 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.912 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.912 
14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.912 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:41.912 { 00:12:41.912 "cntlid": 87, 00:12:41.912 "qid": 0, 00:12:41.912 "state": "enabled", 00:12:41.912 "thread": "nvmf_tgt_poll_group_000", 00:12:41.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:41.912 "listen_address": { 00:12:41.912 "trtype": "TCP", 00:12:41.912 "adrfam": "IPv4", 00:12:41.912 "traddr": "10.0.0.3", 00:12:41.912 "trsvcid": "4420" 00:12:41.912 }, 00:12:41.912 "peer_address": { 00:12:41.912 "trtype": "TCP", 00:12:41.912 "adrfam": "IPv4", 00:12:41.912 "traddr": "10.0.0.1", 00:12:41.912 "trsvcid": "45342" 00:12:41.912 }, 00:12:41.912 "auth": { 00:12:41.912 "state": "completed", 00:12:41.912 "digest": "sha384", 00:12:41.912 "dhgroup": "ffdhe6144" 00:12:41.912 } 00:12:41.912 } 00:12:41.912 ]' 00:12:41.912 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:41.912 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:41.912 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:41.912 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:41.912 14:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.912 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.912 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.912 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.171 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:12:42.171 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:12:43.118 14:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.119 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:43.119 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.119 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.119 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.119 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:12:43.119 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.119 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:43.119 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:43.119 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:43.119 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.119 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:43.119 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:43.119 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:43.119 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.119 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.119 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.119 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.383 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.383 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.383 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.383 14:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.951 00:12:43.951 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.951 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.951 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.210 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.210 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.210 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:44.210 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.210 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.210 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.210 { 00:12:44.210 "cntlid": 89, 00:12:44.210 "qid": 0, 00:12:44.210 "state": "enabled", 00:12:44.210 "thread": "nvmf_tgt_poll_group_000", 00:12:44.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:44.210 "listen_address": { 00:12:44.210 "trtype": "TCP", 00:12:44.210 "adrfam": "IPv4", 00:12:44.210 "traddr": "10.0.0.3", 00:12:44.210 "trsvcid": "4420" 00:12:44.210 }, 00:12:44.210 "peer_address": { 00:12:44.210 "trtype": "TCP", 00:12:44.210 "adrfam": "IPv4", 00:12:44.210 "traddr": "10.0.0.1", 00:12:44.210 "trsvcid": "35512" 00:12:44.210 }, 00:12:44.210 "auth": { 00:12:44.210 "state": "completed", 00:12:44.210 "digest": "sha384", 00:12:44.210 "dhgroup": "ffdhe8192" 00:12:44.210 } 00:12:44.210 } 00:12:44.210 ]' 00:12:44.210 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.210 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:44.210 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.210 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:44.210 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.469 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.469 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.469 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.729 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:12:44.729 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:12:45.298 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.298 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:45.298 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:45.298 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.298 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.298 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.298 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:45.298 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:45.557 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:45.557 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.557 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:45.557 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:45.557 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:45.557 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.557 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.557 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.557 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.557 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.557 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.557 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.557 14:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.126 00:12:46.126 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.126 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.126 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.385 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.385 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.385 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.385 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.385 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.385 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.385 { 00:12:46.385 "cntlid": 91, 00:12:46.385 "qid": 0, 00:12:46.385 "state": "enabled", 00:12:46.385 "thread": "nvmf_tgt_poll_group_000", 00:12:46.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:46.385 "listen_address": { 00:12:46.385 "trtype": "TCP", 00:12:46.385 "adrfam": "IPv4", 00:12:46.385 "traddr": "10.0.0.3", 00:12:46.385 "trsvcid": "4420" 00:12:46.385 }, 00:12:46.385 "peer_address": { 00:12:46.385 "trtype": "TCP", 00:12:46.385 "adrfam": "IPv4", 00:12:46.385 "traddr": "10.0.0.1", 00:12:46.385 "trsvcid": "35540" 00:12:46.385 }, 00:12:46.385 "auth": { 00:12:46.385 "state": "completed", 00:12:46.385 "digest": "sha384", 00:12:46.385 "dhgroup": "ffdhe8192" 00:12:46.385 } 00:12:46.385 } 00:12:46.385 ]' 00:12:46.385 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.385 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:46.385 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.385 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:46.385 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.644 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.644 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.644 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.903 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:12:46.903 14:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:12:47.471 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.471 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:47.471 
14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.471 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.471 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.471 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.471 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:47.471 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:47.731 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:47.731 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.731 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:47.731 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:47.731 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:47.731 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.731 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.731 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.731 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.731 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.731 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.731 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.731 14:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.298 00:12:48.298 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.298 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.298 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.558 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.558 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.558 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.558 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.558 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.558 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.558 { 00:12:48.558 "cntlid": 93, 00:12:48.558 "qid": 0, 00:12:48.558 "state": "enabled", 00:12:48.558 "thread": "nvmf_tgt_poll_group_000", 00:12:48.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:48.558 "listen_address": { 00:12:48.558 "trtype": "TCP", 00:12:48.558 "adrfam": "IPv4", 00:12:48.558 "traddr": "10.0.0.3", 00:12:48.558 "trsvcid": "4420" 00:12:48.558 }, 00:12:48.558 "peer_address": { 00:12:48.558 "trtype": "TCP", 00:12:48.558 "adrfam": "IPv4", 00:12:48.558 "traddr": "10.0.0.1", 00:12:48.558 "trsvcid": "35568" 00:12:48.558 }, 00:12:48.558 "auth": { 00:12:48.558 "state": "completed", 00:12:48.558 "digest": "sha384", 00:12:48.558 "dhgroup": "ffdhe8192" 00:12:48.558 } 00:12:48.558 } 00:12:48.558 ]' 00:12:48.558 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.558 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:48.558 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.558 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:48.558 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.817 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.817 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.817 14:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.075 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:12:49.075 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:12:49.643 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.643 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:49.643 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.643 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.643 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.643 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:49.643 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:49.643 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:50.212 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:50.212 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.212 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:50.212 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:50.212 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:50.212 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.212 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:12:50.212 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.212 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.212 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.212 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:50.212 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.212 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.780 00:12:50.780 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.780 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.780 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.039 14:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.039 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.039 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.039 14:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.039 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.039 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.039 { 00:12:51.039 "cntlid": 95, 00:12:51.039 "qid": 0, 00:12:51.039 "state": "enabled", 00:12:51.039 "thread": "nvmf_tgt_poll_group_000", 00:12:51.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:51.039 "listen_address": { 00:12:51.039 "trtype": "TCP", 00:12:51.039 "adrfam": "IPv4", 00:12:51.039 "traddr": "10.0.0.3", 00:12:51.039 "trsvcid": "4420" 00:12:51.039 }, 00:12:51.039 "peer_address": { 00:12:51.039 "trtype": "TCP", 00:12:51.040 "adrfam": "IPv4", 00:12:51.040 "traddr": "10.0.0.1", 00:12:51.040 "trsvcid": "35586" 00:12:51.040 }, 00:12:51.040 "auth": { 00:12:51.040 "state": "completed", 00:12:51.040 "digest": "sha384", 00:12:51.040 "dhgroup": "ffdhe8192" 00:12:51.040 } 00:12:51.040 } 00:12:51.040 ]' 00:12:51.040 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.040 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:51.040 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.040 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:51.040 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.040 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.040 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.040 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.299 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:12:51.299 14:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.237 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.804 00:12:52.804 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.804 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
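Up to this point each pass pairs a target-side registration with a host-side attach: the target is told to accept this host NQN only with a specific DH-HMAC-CHAP key (plus a controller key when bidirectional authentication is being tested), and the host-side bdev_nvme controller is then attached with the same key names. Condensed from the trace above, and assuming key0/ckey0 were registered with the host earlier in the run (not shown in this excerpt), a single pass reduces to roughly:

    # target side: allow the host on the subsystem with DH-HMAC-CHAP keys
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side (RPC socket /var/tmp/host.sock): attach and authenticate with the same keys
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0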
-- target/auth.sh@73 -- # jq -r '.[].name' 00:12:52.804 14:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.064 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.064 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.064 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.064 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.064 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.064 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.064 { 00:12:53.064 "cntlid": 97, 00:12:53.064 "qid": 0, 00:12:53.064 "state": "enabled", 00:12:53.064 "thread": "nvmf_tgt_poll_group_000", 00:12:53.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:53.065 "listen_address": { 00:12:53.065 "trtype": "TCP", 00:12:53.065 "adrfam": "IPv4", 00:12:53.065 "traddr": "10.0.0.3", 00:12:53.065 "trsvcid": "4420" 00:12:53.065 }, 00:12:53.065 "peer_address": { 00:12:53.065 "trtype": "TCP", 00:12:53.065 "adrfam": "IPv4", 00:12:53.065 "traddr": "10.0.0.1", 00:12:53.065 "trsvcid": "33084" 00:12:53.065 }, 00:12:53.065 "auth": { 00:12:53.065 "state": "completed", 00:12:53.065 "digest": "sha512", 00:12:53.065 "dhgroup": "null" 00:12:53.065 } 00:12:53.065 } 00:12:53.065 ]' 00:12:53.065 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.065 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.065 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.065 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:53.065 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.065 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.065 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.065 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.323 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:12:53.323 14:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret 
DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:12:53.890 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.890 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:53.890 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.890 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.890 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.890 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.890 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:53.890 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:54.457 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:54.457 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.457 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:54.457 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:54.457 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:54.457 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.457 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.457 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.457 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.457 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.457 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.457 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.457 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:12:54.715 00:12:54.715 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.715 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.715 14:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.974 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.974 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.974 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.974 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.974 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.974 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.974 { 00:12:54.974 "cntlid": 99, 00:12:54.974 "qid": 0, 00:12:54.974 "state": "enabled", 00:12:54.974 "thread": "nvmf_tgt_poll_group_000", 00:12:54.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:54.974 "listen_address": { 00:12:54.974 "trtype": "TCP", 00:12:54.974 "adrfam": "IPv4", 00:12:54.974 "traddr": "10.0.0.3", 00:12:54.974 "trsvcid": "4420" 00:12:54.974 }, 00:12:54.974 "peer_address": { 00:12:54.974 "trtype": "TCP", 00:12:54.974 "adrfam": "IPv4", 00:12:54.974 "traddr": "10.0.0.1", 00:12:54.974 "trsvcid": "33110" 00:12:54.974 }, 00:12:54.974 "auth": { 00:12:54.974 "state": "completed", 00:12:54.974 "digest": "sha512", 00:12:54.974 "dhgroup": "null" 00:12:54.974 } 00:12:54.974 } 00:12:54.974 ]' 00:12:54.974 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.974 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:54.974 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.974 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:54.974 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.974 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.974 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.974 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.233 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:12:55.233 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 
--dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:12:55.801 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.801 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:55.801 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.801 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.801 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.801 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.801 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:55.801 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:56.369 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:56.369 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.369 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:56.369 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:56.369 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:56.370 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.370 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.370 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.370 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.370 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.370 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.370 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.370 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.370 00:12:56.370 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.370 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.370 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.961 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.961 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.961 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.961 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.961 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.961 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.961 { 00:12:56.961 "cntlid": 101, 00:12:56.961 "qid": 0, 00:12:56.961 "state": "enabled", 00:12:56.961 "thread": "nvmf_tgt_poll_group_000", 00:12:56.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:56.961 "listen_address": { 00:12:56.961 "trtype": "TCP", 00:12:56.961 "adrfam": "IPv4", 00:12:56.961 "traddr": "10.0.0.3", 00:12:56.961 "trsvcid": "4420" 00:12:56.961 }, 00:12:56.961 "peer_address": { 00:12:56.961 "trtype": "TCP", 00:12:56.961 "adrfam": "IPv4", 00:12:56.961 "traddr": "10.0.0.1", 00:12:56.961 "trsvcid": "33136" 00:12:56.961 }, 00:12:56.961 "auth": { 00:12:56.961 "state": "completed", 00:12:56.961 "digest": "sha512", 00:12:56.961 "dhgroup": "null" 00:12:56.961 } 00:12:56.961 } 00:12:56.961 ]' 00:12:56.961 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.961 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:56.961 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.961 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:56.961 14:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.961 14:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.961 14:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.961 14:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.221 14:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:12:57.221 14:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:12:57.788 14:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.789 14:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:57.789 14:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.789 14:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.789 14:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.789 14:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.789 14:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:57.789 14:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:58.047 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:12:58.047 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.047 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:58.047 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:58.047 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:58.047 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.047 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:12:58.047 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.047 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.306 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.306 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:58.306 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:58.306 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:58.564 00:12:58.564 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.564 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.564 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.823 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.823 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.823 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.823 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.823 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.823 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.823 { 00:12:58.823 "cntlid": 103, 00:12:58.823 "qid": 0, 00:12:58.823 "state": "enabled", 00:12:58.823 "thread": "nvmf_tgt_poll_group_000", 00:12:58.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:12:58.823 "listen_address": { 00:12:58.823 "trtype": "TCP", 00:12:58.823 "adrfam": "IPv4", 00:12:58.823 "traddr": "10.0.0.3", 00:12:58.823 "trsvcid": "4420" 00:12:58.823 }, 00:12:58.823 "peer_address": { 00:12:58.823 "trtype": "TCP", 00:12:58.823 "adrfam": "IPv4", 00:12:58.823 "traddr": "10.0.0.1", 00:12:58.823 "trsvcid": "33160" 00:12:58.823 }, 00:12:58.823 "auth": { 00:12:58.823 "state": "completed", 00:12:58.823 "digest": "sha512", 00:12:58.823 "dhgroup": "null" 00:12:58.823 } 00:12:58.823 } 00:12:58.823 ]' 00:12:58.823 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.823 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.823 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.823 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:58.823 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.823 14:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.823 14:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.823 14:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.391 14:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:12:59.391 14:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 
63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:12:59.959 14:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.959 14:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:12:59.959 14:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.959 14:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.959 14:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.959 14:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:59.959 14:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.959 14:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:59.959 14:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:00.218 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:13:00.218 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.218 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:00.218 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:00.218 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:00.218 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.218 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.218 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.218 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.218 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.218 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.218 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.218 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.478 00:13:00.478 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.478 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.478 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.738 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.738 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.738 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.738 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.738 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.738 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.738 { 00:13:00.738 "cntlid": 105, 00:13:00.738 "qid": 0, 00:13:00.738 "state": "enabled", 00:13:00.738 "thread": "nvmf_tgt_poll_group_000", 00:13:00.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:00.738 "listen_address": { 00:13:00.738 "trtype": "TCP", 00:13:00.738 "adrfam": "IPv4", 00:13:00.738 "traddr": "10.0.0.3", 00:13:00.738 "trsvcid": "4420" 00:13:00.738 }, 00:13:00.738 "peer_address": { 00:13:00.738 "trtype": "TCP", 00:13:00.738 "adrfam": "IPv4", 00:13:00.738 "traddr": "10.0.0.1", 00:13:00.738 "trsvcid": "33192" 00:13:00.738 }, 00:13:00.738 "auth": { 00:13:00.738 "state": "completed", 00:13:00.738 "digest": "sha512", 00:13:00.738 "dhgroup": "ffdhe2048" 00:13:00.738 } 00:13:00.738 } 00:13:00.738 ]' 00:13:00.738 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.738 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:00.738 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.997 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:00.997 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.997 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.997 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.997 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.256 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:13:01.256 14:28:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:13:01.823 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.823 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:01.823 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.823 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.823 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.823 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.823 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:01.823 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:02.083 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:13:02.083 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.083 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:02.083 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:02.083 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:02.083 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.083 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.083 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.083 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.083 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.083 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.083 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.083 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.650 00:13:02.650 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.650 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.650 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.650 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.650 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.650 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.650 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.650 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.650 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.650 { 00:13:02.650 "cntlid": 107, 00:13:02.650 "qid": 0, 00:13:02.650 "state": "enabled", 00:13:02.650 "thread": "nvmf_tgt_poll_group_000", 00:13:02.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:02.650 "listen_address": { 00:13:02.650 "trtype": "TCP", 00:13:02.650 "adrfam": "IPv4", 00:13:02.650 "traddr": "10.0.0.3", 00:13:02.650 "trsvcid": "4420" 00:13:02.650 }, 00:13:02.650 "peer_address": { 00:13:02.650 "trtype": "TCP", 00:13:02.650 "adrfam": "IPv4", 00:13:02.650 "traddr": "10.0.0.1", 00:13:02.650 "trsvcid": "33872" 00:13:02.650 }, 00:13:02.650 "auth": { 00:13:02.650 "state": "completed", 00:13:02.650 "digest": "sha512", 00:13:02.650 "dhgroup": "ffdhe2048" 00:13:02.650 } 00:13:02.650 } 00:13:02.650 ]' 00:13:02.650 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.910 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.910 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.910 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:02.910 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.910 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.910 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.910 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.168 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:13:03.168 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:13:03.734 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.734 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:03.734 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.734 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.734 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.734 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.734 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:03.734 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:04.302 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:13:04.302 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.302 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:04.302 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:04.302 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:04.302 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.302 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.302 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.302 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.302 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.302 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.302 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.302 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.561 00:13:04.561 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.561 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.561 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.820 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.820 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.820 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.820 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.820 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.820 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.820 { 00:13:04.820 "cntlid": 109, 00:13:04.820 "qid": 0, 00:13:04.820 "state": "enabled", 00:13:04.820 "thread": "nvmf_tgt_poll_group_000", 00:13:04.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:04.820 "listen_address": { 00:13:04.820 "trtype": "TCP", 00:13:04.820 "adrfam": "IPv4", 00:13:04.820 "traddr": "10.0.0.3", 00:13:04.820 "trsvcid": "4420" 00:13:04.820 }, 00:13:04.820 "peer_address": { 00:13:04.820 "trtype": "TCP", 00:13:04.820 "adrfam": "IPv4", 00:13:04.820 "traddr": "10.0.0.1", 00:13:04.820 "trsvcid": "33916" 00:13:04.820 }, 00:13:04.820 "auth": { 00:13:04.820 "state": "completed", 00:13:04.820 "digest": "sha512", 00:13:04.820 "dhgroup": "ffdhe2048" 00:13:04.820 } 00:13:04.820 } 00:13:04.820 ]' 00:13:04.820 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.820 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.820 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.820 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:04.820 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:05.079 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.079 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.079 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.338 
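Each attach is verified from the target side rather than trusted blindly: nvmf_subsystem_get_qpairs returns the qpair with an auth object, and the script asserts the negotiated digest, DH group and final state with jq, as in the JSON printed above. A minimal restatement of that check (placeholder plumbing, but the same RPC and jq paths as in the trace):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # the pass only counts if authentication completed with the digest/dhgroup selected for it
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]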
14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:13:05.339 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:13:05.906 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.906 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:05.906 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.906 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.906 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.906 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.906 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:05.906 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:06.165 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:06.165 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.165 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:06.165 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:06.165 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:06.165 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.165 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:13:06.165 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.165 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.165 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.165 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:06.165 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:06.166 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:06.425 00:13:06.425 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.425 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.425 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.684 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.684 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.684 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.684 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.684 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.684 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.684 { 00:13:06.684 "cntlid": 111, 00:13:06.684 "qid": 0, 00:13:06.684 "state": "enabled", 00:13:06.684 "thread": "nvmf_tgt_poll_group_000", 00:13:06.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:06.684 "listen_address": { 00:13:06.684 "trtype": "TCP", 00:13:06.684 "adrfam": "IPv4", 00:13:06.684 "traddr": "10.0.0.3", 00:13:06.684 "trsvcid": "4420" 00:13:06.684 }, 00:13:06.684 "peer_address": { 00:13:06.684 "trtype": "TCP", 00:13:06.684 "adrfam": "IPv4", 00:13:06.684 "traddr": "10.0.0.1", 00:13:06.684 "trsvcid": "33942" 00:13:06.684 }, 00:13:06.684 "auth": { 00:13:06.684 "state": "completed", 00:13:06.684 "digest": "sha512", 00:13:06.684 "dhgroup": "ffdhe2048" 00:13:06.684 } 00:13:06.684 } 00:13:06.684 ]' 00:13:06.684 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.943 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.943 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.943 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:06.943 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.943 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.943 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.943 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
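(Reading aid, not part of the captured trace.) The detach that just completed closes one full authentication cycle, and the same sequence repeats below for every digest/DH-group/key combination. A minimal bash sketch of one cycle, built only from commands visible in this trace, follows; the socket path, addresses and NQNs are copied from the log, the key names (key0…key3, ckey0…) are assumed to be keyring entries registered earlier in the run, and the target-side calls are shown against rpc.py's default socket even though the script actually routes them through its rpc_cmd helper.

# Sketch of one DH-HMAC-CHAP cycle as exercised above (assumptions noted in the lead-in).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: restrict negotiation to the digest/DH-group pair under test.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Target side: allow the host NQN and bind it to the key being tested.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

# Host side: attach, which forces DH-HMAC-CHAP authentication with key3 ...
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3

# ... and tear it down again before the next combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"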
00:13:07.218 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:13:07.218 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:13:07.798 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.798 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:07.798 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.798 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.798 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.798 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:07.798 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.798 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:07.798 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:08.058 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:08.058 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.058 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:08.058 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:08.058 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:08.058 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.058 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.058 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.058 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.058 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.058 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:13:08.058 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.058 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.317 00:13:08.317 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.317 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.317 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.576 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.576 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.576 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.576 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.576 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.576 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.576 { 00:13:08.576 "cntlid": 113, 00:13:08.576 "qid": 0, 00:13:08.576 "state": "enabled", 00:13:08.576 "thread": "nvmf_tgt_poll_group_000", 00:13:08.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:08.576 "listen_address": { 00:13:08.576 "trtype": "TCP", 00:13:08.576 "adrfam": "IPv4", 00:13:08.576 "traddr": "10.0.0.3", 00:13:08.576 "trsvcid": "4420" 00:13:08.576 }, 00:13:08.576 "peer_address": { 00:13:08.576 "trtype": "TCP", 00:13:08.576 "adrfam": "IPv4", 00:13:08.576 "traddr": "10.0.0.1", 00:13:08.576 "trsvcid": "33956" 00:13:08.576 }, 00:13:08.576 "auth": { 00:13:08.576 "state": "completed", 00:13:08.576 "digest": "sha512", 00:13:08.576 "dhgroup": "ffdhe3072" 00:13:08.576 } 00:13:08.576 } 00:13:08.576 ]' 00:13:08.576 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.835 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.835 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.835 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:08.835 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.835 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.835 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.835 14:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.095 14:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:13:09.095 14:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:13:09.662 14:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.662 14:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:09.662 14:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.662 14:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.662 14:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.662 14:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.662 14:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:09.662 14:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:10.230 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:10.230 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.230 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:10.230 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:10.230 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:10.230 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.230 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.230 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.230 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.230 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.230 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.230 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.230 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.490 00:13:10.490 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.490 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.490 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.748 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.748 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.748 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.748 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.748 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.748 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.748 { 00:13:10.748 "cntlid": 115, 00:13:10.748 "qid": 0, 00:13:10.748 "state": "enabled", 00:13:10.748 "thread": "nvmf_tgt_poll_group_000", 00:13:10.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:10.748 "listen_address": { 00:13:10.748 "trtype": "TCP", 00:13:10.748 "adrfam": "IPv4", 00:13:10.748 "traddr": "10.0.0.3", 00:13:10.748 "trsvcid": "4420" 00:13:10.748 }, 00:13:10.748 "peer_address": { 00:13:10.748 "trtype": "TCP", 00:13:10.748 "adrfam": "IPv4", 00:13:10.748 "traddr": "10.0.0.1", 00:13:10.749 "trsvcid": "33984" 00:13:10.749 }, 00:13:10.749 "auth": { 00:13:10.749 "state": "completed", 00:13:10.749 "digest": "sha512", 00:13:10.749 "dhgroup": "ffdhe3072" 00:13:10.749 } 00:13:10.749 } 00:13:10.749 ]' 00:13:10.749 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:10.749 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:10.749 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:10.749 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:10.749 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.007 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
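(Reading aid.) The jq checks above are the assertion that runs after every attach: the host-side controller list must contain nvme0, and the target's qpair must report the negotiated digest, DH group and a completed authentication state. Condensed, and assuming rpc_cmd wraps the same rpc.py against the target's default socket:

# Post-attach verification as performed in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha512 ]]      # digest under test
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]  # DH group under test
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]    # authentication succeeded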
00:13:11.007 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.007 14:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.264 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:13:11.264 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:13:12.198 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.198 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:12.198 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.198 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.198 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.198 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.198 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:12.198 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:12.198 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:12.198 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.198 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:12.198 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:12.198 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:12.198 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.198 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.198 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.198 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:12.457 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.457 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.457 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.457 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.717 00:13:12.717 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.717 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.717 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.976 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.976 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.976 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.976 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.976 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.976 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.976 { 00:13:12.976 "cntlid": 117, 00:13:12.976 "qid": 0, 00:13:12.976 "state": "enabled", 00:13:12.976 "thread": "nvmf_tgt_poll_group_000", 00:13:12.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:12.976 "listen_address": { 00:13:12.976 "trtype": "TCP", 00:13:12.976 "adrfam": "IPv4", 00:13:12.976 "traddr": "10.0.0.3", 00:13:12.976 "trsvcid": "4420" 00:13:12.976 }, 00:13:12.976 "peer_address": { 00:13:12.976 "trtype": "TCP", 00:13:12.976 "adrfam": "IPv4", 00:13:12.976 "traddr": "10.0.0.1", 00:13:12.976 "trsvcid": "33184" 00:13:12.976 }, 00:13:12.976 "auth": { 00:13:12.976 "state": "completed", 00:13:12.976 "digest": "sha512", 00:13:12.976 "dhgroup": "ffdhe3072" 00:13:12.976 } 00:13:12.976 } 00:13:12.976 ]' 00:13:12.976 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.976 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:12.976 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.233 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:13.233 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.233 14:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.233 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.233 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.491 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:13:13.491 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:13:14.060 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.060 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:14.060 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.060 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.060 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.060 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.060 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:14.060 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:14.319 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:14.319 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.319 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:14.319 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:14.319 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:14.319 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.319 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:13:14.319 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:14.319 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.319 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.319 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:14.319 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:14.319 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:14.888 00:13:14.888 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.888 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.888 14:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:14.888 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.888 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.888 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.888 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.888 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.888 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.888 { 00:13:14.888 "cntlid": 119, 00:13:14.888 "qid": 0, 00:13:14.888 "state": "enabled", 00:13:14.888 "thread": "nvmf_tgt_poll_group_000", 00:13:14.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:14.888 "listen_address": { 00:13:14.888 "trtype": "TCP", 00:13:14.888 "adrfam": "IPv4", 00:13:14.888 "traddr": "10.0.0.3", 00:13:14.888 "trsvcid": "4420" 00:13:14.888 }, 00:13:14.888 "peer_address": { 00:13:14.888 "trtype": "TCP", 00:13:14.888 "adrfam": "IPv4", 00:13:14.888 "traddr": "10.0.0.1", 00:13:14.888 "trsvcid": "33216" 00:13:14.888 }, 00:13:14.888 "auth": { 00:13:14.888 "state": "completed", 00:13:14.888 "digest": "sha512", 00:13:14.888 "dhgroup": "ffdhe3072" 00:13:14.888 } 00:13:14.888 } 00:13:14.888 ]' 00:13:14.888 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:15.147 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:15.147 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.147 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:15.147 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.147 
14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.147 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.147 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.406 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:13:15.406 14:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:13:15.974 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.974 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:15.974 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.974 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.974 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.974 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:15.974 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:15.974 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:15.974 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:16.543 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:16.543 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.543 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:16.543 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:16.543 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:16.543 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.543 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.543 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.543 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.543 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.543 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.543 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.543 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.802 00:13:16.802 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:16.802 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.802 14:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.061 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.061 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.061 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.061 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.061 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.061 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.061 { 00:13:17.061 "cntlid": 121, 00:13:17.061 "qid": 0, 00:13:17.061 "state": "enabled", 00:13:17.061 "thread": "nvmf_tgt_poll_group_000", 00:13:17.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:17.061 "listen_address": { 00:13:17.061 "trtype": "TCP", 00:13:17.061 "adrfam": "IPv4", 00:13:17.061 "traddr": "10.0.0.3", 00:13:17.061 "trsvcid": "4420" 00:13:17.061 }, 00:13:17.061 "peer_address": { 00:13:17.061 "trtype": "TCP", 00:13:17.061 "adrfam": "IPv4", 00:13:17.061 "traddr": "10.0.0.1", 00:13:17.061 "trsvcid": "33256" 00:13:17.061 }, 00:13:17.061 "auth": { 00:13:17.061 "state": "completed", 00:13:17.061 "digest": "sha512", 00:13:17.061 "dhgroup": "ffdhe4096" 00:13:17.061 } 00:13:17.061 } 00:13:17.061 ]' 00:13:17.061 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.061 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:17.061 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.061 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:17.061 14:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.061 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.061 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.061 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.629 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:13:17.629 14:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:13:18.218 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.218 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:18.218 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.218 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.218 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.218 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.218 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:18.218 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:18.497 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:18.497 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.497 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:18.497 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:18.497 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:18.497 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.497 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.497 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.497 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.497 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.497 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.497 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.497 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.755 00:13:18.755 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:18.755 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:18.755 14:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.014 14:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.014 14:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.014 14:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.014 14:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.015 14:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.015 14:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.015 { 00:13:19.015 "cntlid": 123, 00:13:19.015 "qid": 0, 00:13:19.015 "state": "enabled", 00:13:19.015 "thread": "nvmf_tgt_poll_group_000", 00:13:19.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:19.015 "listen_address": { 00:13:19.015 "trtype": "TCP", 00:13:19.015 "adrfam": "IPv4", 00:13:19.015 "traddr": "10.0.0.3", 00:13:19.015 "trsvcid": "4420" 00:13:19.015 }, 00:13:19.015 "peer_address": { 00:13:19.015 "trtype": "TCP", 00:13:19.015 "adrfam": "IPv4", 00:13:19.015 "traddr": "10.0.0.1", 00:13:19.015 "trsvcid": "33272" 00:13:19.015 }, 00:13:19.015 "auth": { 00:13:19.015 "state": "completed", 00:13:19.015 "digest": "sha512", 00:13:19.015 "dhgroup": "ffdhe4096" 00:13:19.015 } 00:13:19.015 } 00:13:19.015 ]' 00:13:19.015 14:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.015 14:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.015 14:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.015 14:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:19.015 14:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.274 14:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.274 14:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.274 14:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.533 14:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:13:19.533 14:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:13:20.100 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.101 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:20.101 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.101 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.101 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.101 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.101 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:20.101 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:20.360 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:20.360 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.360 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:20.360 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:20.360 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:20.360 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.360 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:20.360 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.360 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.360 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.360 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:20.360 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:20.360 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:20.927 00:13:20.927 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.927 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.927 14:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.187 14:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.187 14:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.187 14:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.187 14:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.187 14:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.187 14:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.187 { 00:13:21.187 "cntlid": 125, 00:13:21.187 "qid": 0, 00:13:21.187 "state": "enabled", 00:13:21.187 "thread": "nvmf_tgt_poll_group_000", 00:13:21.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:21.187 "listen_address": { 00:13:21.187 "trtype": "TCP", 00:13:21.187 "adrfam": "IPv4", 00:13:21.187 "traddr": "10.0.0.3", 00:13:21.187 "trsvcid": "4420" 00:13:21.187 }, 00:13:21.187 "peer_address": { 00:13:21.187 "trtype": "TCP", 00:13:21.187 "adrfam": "IPv4", 00:13:21.187 "traddr": "10.0.0.1", 00:13:21.187 "trsvcid": "33306" 00:13:21.187 }, 00:13:21.187 "auth": { 00:13:21.187 "state": "completed", 00:13:21.187 "digest": "sha512", 00:13:21.187 "dhgroup": "ffdhe4096" 00:13:21.187 } 00:13:21.187 } 00:13:21.187 ]' 00:13:21.187 14:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
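(Reading aid.) Besides the SPDK host stack, every cycle also authenticates the Linux kernel initiator: the nvme_connect helper seen in this trace passes the key material as literal DHHC-1 secrets on the nvme-cli command line and then disconnects again. A sketch, with $key/$ckey standing for the DHHC-1:… strings printed in the log:

# Kernel-initiator leg of a cycle, as driven by nvme-cli in the trace above.
hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "${hostnqn##*uuid:}" -l 0 --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
# A successful connect implies the authentication passed; drop the connection again.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0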
00:13:21.187 14:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.187 14:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.187 14:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:21.187 14:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.187 14:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.187 14:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.187 14:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.755 14:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:13:21.755 14:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:13:22.324 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.324 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:22.324 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.324 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.324 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.324 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.324 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:22.324 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:22.583 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:22.583 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.583 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:22.583 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:22.583 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:13:22.583 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.583 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:13:22.583 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.583 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.583 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.583 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:22.583 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:22.583 14:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:22.843 00:13:22.843 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.843 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.843 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.415 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.415 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.415 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.415 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.415 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.415 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.415 { 00:13:23.415 "cntlid": 127, 00:13:23.415 "qid": 0, 00:13:23.415 "state": "enabled", 00:13:23.415 "thread": "nvmf_tgt_poll_group_000", 00:13:23.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:23.415 "listen_address": { 00:13:23.415 "trtype": "TCP", 00:13:23.415 "adrfam": "IPv4", 00:13:23.415 "traddr": "10.0.0.3", 00:13:23.415 "trsvcid": "4420" 00:13:23.415 }, 00:13:23.415 "peer_address": { 00:13:23.415 "trtype": "TCP", 00:13:23.415 "adrfam": "IPv4", 00:13:23.415 "traddr": "10.0.0.1", 00:13:23.415 "trsvcid": "34580" 00:13:23.415 }, 00:13:23.415 "auth": { 00:13:23.415 "state": "completed", 00:13:23.415 "digest": "sha512", 00:13:23.415 "dhgroup": "ffdhe4096" 00:13:23.415 } 00:13:23.415 } 00:13:23.415 ]' 00:13:23.415 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:13:23.415 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.415 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.415 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:23.415 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.415 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.415 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.415 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.674 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:13:23.674 14:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:13:24.243 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.243 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:24.243 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.243 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.243 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.243 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:24.243 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.243 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:24.243 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:24.811 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:24.811 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.811 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:24.811 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:24.811 14:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:24.811 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.811 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.811 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.811 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.811 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.811 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.811 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.811 14:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.070 00:13:25.070 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.070 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.070 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.330 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.330 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.330 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.330 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.330 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.330 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.330 { 00:13:25.330 "cntlid": 129, 00:13:25.330 "qid": 0, 00:13:25.330 "state": "enabled", 00:13:25.330 "thread": "nvmf_tgt_poll_group_000", 00:13:25.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:25.330 "listen_address": { 00:13:25.330 "trtype": "TCP", 00:13:25.330 "adrfam": "IPv4", 00:13:25.330 "traddr": "10.0.0.3", 00:13:25.330 "trsvcid": "4420" 00:13:25.330 }, 00:13:25.330 "peer_address": { 00:13:25.330 "trtype": "TCP", 00:13:25.330 "adrfam": "IPv4", 00:13:25.330 "traddr": "10.0.0.1", 00:13:25.330 "trsvcid": "34596" 00:13:25.330 }, 00:13:25.330 "auth": { 00:13:25.330 "state": "completed", 00:13:25.330 "digest": "sha512", 00:13:25.330 "dhgroup": "ffdhe6144" 
00:13:25.330 } 00:13:25.330 } 00:13:25.330 ]' 00:13:25.330 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.589 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:25.589 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.589 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:25.589 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.589 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.589 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.589 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.848 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:13:25.848 14:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.785 14:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.785 14:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.353 00:13:27.353 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.353 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.353 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.612 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.612 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.612 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.612 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.612 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.612 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.612 { 00:13:27.612 "cntlid": 131, 00:13:27.612 "qid": 0, 00:13:27.612 "state": "enabled", 00:13:27.612 "thread": "nvmf_tgt_poll_group_000", 00:13:27.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:27.612 "listen_address": { 00:13:27.612 "trtype": "TCP", 00:13:27.612 "adrfam": "IPv4", 00:13:27.612 "traddr": "10.0.0.3", 00:13:27.612 "trsvcid": "4420" 00:13:27.612 }, 00:13:27.612 "peer_address": { 00:13:27.612 "trtype": "TCP", 00:13:27.612 "adrfam": 
"IPv4", 00:13:27.612 "traddr": "10.0.0.1", 00:13:27.612 "trsvcid": "34618" 00:13:27.612 }, 00:13:27.612 "auth": { 00:13:27.612 "state": "completed", 00:13:27.612 "digest": "sha512", 00:13:27.612 "dhgroup": "ffdhe6144" 00:13:27.612 } 00:13:27.612 } 00:13:27.612 ]' 00:13:27.612 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.870 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:27.870 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.870 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:27.870 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.870 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.871 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.871 14:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.130 14:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:13:28.130 14:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:13:28.717 14:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.717 14:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:28.717 14:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.717 14:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.717 14:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.717 14:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.717 14:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:28.717 14:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:28.982 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:28.982 14:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.982 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:28.982 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:28.982 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:28.982 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.982 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.982 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.982 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.982 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.982 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.982 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.982 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.550 00:13:29.550 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.550 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.551 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.809 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.809 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.809 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.809 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.809 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.809 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.809 { 00:13:29.809 "cntlid": 133, 00:13:29.810 "qid": 0, 00:13:29.810 "state": "enabled", 00:13:29.810 "thread": "nvmf_tgt_poll_group_000", 00:13:29.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:29.810 "listen_address": { 00:13:29.810 "trtype": "TCP", 00:13:29.810 "adrfam": "IPv4", 00:13:29.810 "traddr": "10.0.0.3", 
00:13:29.810 "trsvcid": "4420" 00:13:29.810 }, 00:13:29.810 "peer_address": { 00:13:29.810 "trtype": "TCP", 00:13:29.810 "adrfam": "IPv4", 00:13:29.810 "traddr": "10.0.0.1", 00:13:29.810 "trsvcid": "34636" 00:13:29.810 }, 00:13:29.810 "auth": { 00:13:29.810 "state": "completed", 00:13:29.810 "digest": "sha512", 00:13:29.810 "dhgroup": "ffdhe6144" 00:13:29.810 } 00:13:29.810 } 00:13:29.810 ]' 00:13:29.810 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.810 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.810 14:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.069 14:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:30.069 14:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.069 14:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.069 14:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.069 14:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.327 14:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:13:30.327 14:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:13:30.895 14:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.895 14:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:30.895 14:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.895 14:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.895 14:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.895 14:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.895 14:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:30.895 14:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:31.154 14:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:31.154 14:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.154 14:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:31.154 14:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:31.154 14:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:31.154 14:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.154 14:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:13:31.154 14:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.154 14:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.154 14:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.154 14:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:31.154 14:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.154 14:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.721 00:13:31.721 14:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.721 14:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.721 14:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.980 14:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.980 14:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.980 14:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.980 14:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.980 14:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.980 14:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.980 { 00:13:31.980 "cntlid": 135, 00:13:31.980 "qid": 0, 00:13:31.980 "state": "enabled", 00:13:31.980 "thread": "nvmf_tgt_poll_group_000", 00:13:31.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:31.980 "listen_address": { 00:13:31.980 "trtype": "TCP", 00:13:31.980 "adrfam": "IPv4", 
00:13:31.980 "traddr": "10.0.0.3", 00:13:31.980 "trsvcid": "4420" 00:13:31.980 }, 00:13:31.980 "peer_address": { 00:13:31.980 "trtype": "TCP", 00:13:31.980 "adrfam": "IPv4", 00:13:31.980 "traddr": "10.0.0.1", 00:13:31.980 "trsvcid": "34660" 00:13:31.980 }, 00:13:31.980 "auth": { 00:13:31.980 "state": "completed", 00:13:31.980 "digest": "sha512", 00:13:31.980 "dhgroup": "ffdhe6144" 00:13:31.980 } 00:13:31.980 } 00:13:31.980 ]' 00:13:31.980 14:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.980 14:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.980 14:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.980 14:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:31.980 14:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.980 14:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.980 14:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.980 14:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.547 14:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:13:32.547 14:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:13:33.113 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.113 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:33.113 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.113 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.113 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.113 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:33.113 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.113 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:33.113 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 
00:13:33.372 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:33.372 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.372 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:33.372 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:33.372 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:33.372 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.372 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.372 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.372 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.372 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.372 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.372 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.372 14:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.939 00:13:33.939 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.939 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.939 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.199 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.199 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.199 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.199 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.199 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.199 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.199 { 00:13:34.199 "cntlid": 137, 00:13:34.199 "qid": 0, 00:13:34.199 "state": "enabled", 00:13:34.199 "thread": "nvmf_tgt_poll_group_000", 00:13:34.199 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:34.199 "listen_address": { 00:13:34.199 "trtype": "TCP", 00:13:34.199 "adrfam": "IPv4", 00:13:34.199 "traddr": "10.0.0.3", 00:13:34.199 "trsvcid": "4420" 00:13:34.199 }, 00:13:34.199 "peer_address": { 00:13:34.199 "trtype": "TCP", 00:13:34.199 "adrfam": "IPv4", 00:13:34.199 "traddr": "10.0.0.1", 00:13:34.199 "trsvcid": "56294" 00:13:34.199 }, 00:13:34.199 "auth": { 00:13:34.199 "state": "completed", 00:13:34.199 "digest": "sha512", 00:13:34.199 "dhgroup": "ffdhe8192" 00:13:34.199 } 00:13:34.199 } 00:13:34.199 ]' 00:13:34.458 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.458 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:34.458 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.458 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:34.458 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.458 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.458 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.458 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.716 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:13:34.716 14:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:13:35.652 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.652 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:35.652 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.652 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.652 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.652 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:35.652 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:13:35.652 14:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:35.912 14:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:35.912 14:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.912 14:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:35.912 14:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:35.912 14:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:35.912 14:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.912 14:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.912 14:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.912 14:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.912 14:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.912 14:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.912 14:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.912 14:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:36.847 00:13:36.847 14:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.847 14:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:36.847 14:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.106 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.106 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.106 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.106 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.106 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.106 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:13:37.106 { 00:13:37.106 "cntlid": 139, 00:13:37.106 "qid": 0, 00:13:37.106 "state": "enabled", 00:13:37.106 "thread": "nvmf_tgt_poll_group_000", 00:13:37.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:37.106 "listen_address": { 00:13:37.106 "trtype": "TCP", 00:13:37.106 "adrfam": "IPv4", 00:13:37.106 "traddr": "10.0.0.3", 00:13:37.106 "trsvcid": "4420" 00:13:37.106 }, 00:13:37.106 "peer_address": { 00:13:37.106 "trtype": "TCP", 00:13:37.106 "adrfam": "IPv4", 00:13:37.106 "traddr": "10.0.0.1", 00:13:37.106 "trsvcid": "56330" 00:13:37.106 }, 00:13:37.106 "auth": { 00:13:37.106 "state": "completed", 00:13:37.106 "digest": "sha512", 00:13:37.106 "dhgroup": "ffdhe8192" 00:13:37.106 } 00:13:37.106 } 00:13:37.106 ]' 00:13:37.106 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.106 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:37.106 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.106 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:37.106 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.365 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.365 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.365 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.625 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:13:37.625 14:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: --dhchap-ctrl-secret DHHC-1:02:ZGZhNWRhZjVhNDBlNTViNjQ3MGI3N2Y4MWQxNWMxNzVkYWY0MjdiYTRlZjQwOGI0+KGN1A==: 00:13:38.560 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.560 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:38.560 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.560 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.560 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.560 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.560 14:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:38.560 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:38.818 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:38.818 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.818 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:38.818 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:38.818 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:38.818 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.818 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:38.818 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.818 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.818 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.818 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:38.818 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:38.818 14:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.754 00:13:39.754 14:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.754 14:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.754 14:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.754 14:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.754 14:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.754 14:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.754 14:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.013 14:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.013 14:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:40.013 { 00:13:40.013 "cntlid": 141, 00:13:40.013 "qid": 0, 00:13:40.013 "state": "enabled", 00:13:40.013 "thread": "nvmf_tgt_poll_group_000", 00:13:40.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:40.013 "listen_address": { 00:13:40.013 "trtype": "TCP", 00:13:40.013 "adrfam": "IPv4", 00:13:40.013 "traddr": "10.0.0.3", 00:13:40.013 "trsvcid": "4420" 00:13:40.013 }, 00:13:40.013 "peer_address": { 00:13:40.013 "trtype": "TCP", 00:13:40.013 "adrfam": "IPv4", 00:13:40.013 "traddr": "10.0.0.1", 00:13:40.013 "trsvcid": "56366" 00:13:40.013 }, 00:13:40.013 "auth": { 00:13:40.013 "state": "completed", 00:13:40.013 "digest": "sha512", 00:13:40.013 "dhgroup": "ffdhe8192" 00:13:40.013 } 00:13:40.013 } 00:13:40.013 ]' 00:13:40.013 14:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:40.013 14:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:40.013 14:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:40.013 14:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:40.013 14:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:40.013 14:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.013 14:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.013 14:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.341 14:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:13:40.341 14:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:01:MjllNWQ0NWI1NTg1NWM0OTQxZDNkYWI2ZjAzMmVmMzF6qQsl: 00:13:40.908 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.908 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:40.908 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.908 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.908 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
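The iteration ending at the previous entry also exercised the kernel host path with nvme-cli: connect using the DHHC-1 secrets printed just above, then disconnect and unregister the host. A minimal sketch (not captured output); the $key/$ckey shell variables are introduced here only to stand in for the long DHHC-1:xx:... strings from the log, and all other flags and values are copied from it.

  key='DHHC-1:02:...'     # value passed as --dhchap-secret in the log (abbreviated)
  ckey='DHHC-1:01:...'    # value passed as --dhchap-ctrl-secret in the log (abbreviated)

  # Host side: connect through the kernel initiator, authenticating with DH-HMAC-CHAP.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 \
      --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"

  # Tear down: disconnect the controller and unregister the host on the target,
  # leaving a clean state for the next digest/dhgroup/key combination.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181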
00:13:40.908 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.908 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:40.908 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:41.476 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:41.476 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.476 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:41.476 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:41.476 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:41.476 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.476 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:13:41.476 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.476 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.476 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.476 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:41.476 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:41.476 14:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:42.044 00:13:42.044 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:42.044 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:42.044 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.303 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.303 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.303 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.303 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.303 
14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.303 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:42.303 { 00:13:42.303 "cntlid": 143, 00:13:42.303 "qid": 0, 00:13:42.303 "state": "enabled", 00:13:42.303 "thread": "nvmf_tgt_poll_group_000", 00:13:42.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:42.303 "listen_address": { 00:13:42.303 "trtype": "TCP", 00:13:42.303 "adrfam": "IPv4", 00:13:42.303 "traddr": "10.0.0.3", 00:13:42.303 "trsvcid": "4420" 00:13:42.303 }, 00:13:42.303 "peer_address": { 00:13:42.303 "trtype": "TCP", 00:13:42.303 "adrfam": "IPv4", 00:13:42.303 "traddr": "10.0.0.1", 00:13:42.303 "trsvcid": "56384" 00:13:42.303 }, 00:13:42.303 "auth": { 00:13:42.303 "state": "completed", 00:13:42.303 "digest": "sha512", 00:13:42.303 "dhgroup": "ffdhe8192" 00:13:42.303 } 00:13:42.303 } 00:13:42.303 ]' 00:13:42.303 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:42.303 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:42.303 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.562 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:42.562 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.562 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.562 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.562 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.822 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:13:42.822 14:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:13:43.762 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.762 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:43.762 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.762 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.762 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.762 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:43.762 
14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:43.762 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:43.763 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:43.763 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:43.763 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:43.763 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:43.763 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.763 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:43.763 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:43.763 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:43.763 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.763 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:43.763 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.763 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.021 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.021 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.021 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.021 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.589 00:13:44.589 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:44.589 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:44.589 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.848 14:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.848 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.848 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.848 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.848 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.848 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.848 { 00:13:44.848 "cntlid": 145, 00:13:44.848 "qid": 0, 00:13:44.848 "state": "enabled", 00:13:44.848 "thread": "nvmf_tgt_poll_group_000", 00:13:44.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:44.848 "listen_address": { 00:13:44.848 "trtype": "TCP", 00:13:44.848 "adrfam": "IPv4", 00:13:44.848 "traddr": "10.0.0.3", 00:13:44.848 "trsvcid": "4420" 00:13:44.848 }, 00:13:44.848 "peer_address": { 00:13:44.848 "trtype": "TCP", 00:13:44.848 "adrfam": "IPv4", 00:13:44.848 "traddr": "10.0.0.1", 00:13:44.848 "trsvcid": "38770" 00:13:44.848 }, 00:13:44.848 "auth": { 00:13:44.848 "state": "completed", 00:13:44.848 "digest": "sha512", 00:13:44.848 "dhgroup": "ffdhe8192" 00:13:44.848 } 00:13:44.848 } 00:13:44.848 ]' 00:13:44.848 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.848 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:44.848 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.848 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:44.848 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.848 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.848 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.848 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.416 14:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:13:45.416 14:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:00:MGY0NjQ1ZjQ3MGRiYWU2MDI2ZDJmNGE5ZjhjMDViZTllMzJhYmRjODY5MzAwYjNk7L5REg==: --dhchap-ctrl-secret DHHC-1:03:NWZjNWVjZmY3MmM0ZGYwNTJkMWI5NDcxZjdlMjhmZDE4NTBiOGNmZGZlZjg4MGVmNzQ4ZDdhZWFkMDllNDcxYYTap7s=: 00:13:45.984 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.984 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:13:45.984 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:45.984 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.985 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.985 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.985 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 00:13:45.985 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.985 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.985 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.985 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:45.985 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:45.985 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:45.985 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:45.985 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.985 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:45.985 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.985 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:45.985 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:45.985 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:46.919 request: 00:13:46.919 { 00:13:46.919 "name": "nvme0", 00:13:46.919 "trtype": "tcp", 00:13:46.919 "traddr": "10.0.0.3", 00:13:46.919 "adrfam": "ipv4", 00:13:46.919 "trsvcid": "4420", 00:13:46.919 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:46.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:46.919 "prchk_reftag": false, 00:13:46.919 "prchk_guard": false, 00:13:46.919 "hdgst": false, 00:13:46.919 "ddgst": false, 00:13:46.919 "dhchap_key": "key2", 00:13:46.919 "allow_unrecognized_csi": false, 00:13:46.919 "method": "bdev_nvme_attach_controller", 00:13:46.919 "req_id": 1 00:13:46.919 } 
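The attach request above is one of auth.sh's deliberate negative tests: the host entry was re-registered with --dhchap-key key1 only (target/auth.sh@144), and the connect attempt is wrapped in NOT while presenting key2 (target/auth.sh@145), so the DH-HMAC-CHAP handshake is expected to be rejected and the -5 "Input/output error" that follows counts as a pass. A minimal sketch of the same mismatched-key check, using only the RPC flags that appear in this run (socket paths, NQNs and addresses are copied from the log):

    # target side: allow this host, but only with key1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1
    # host side: try to attach while offering key2; bdev_nvme_attach_controller
    # is expected to fail, which the NOT wrapper turns into a test pass
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2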
00:13:46.919 Got JSON-RPC error response 00:13:46.919 response: 00:13:46.919 { 00:13:46.919 "code": -5, 00:13:46.919 "message": "Input/output error" 00:13:46.919 } 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:46.919 14:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:47.485 request: 00:13:47.485 { 00:13:47.485 "name": "nvme0", 00:13:47.485 "trtype": "tcp", 00:13:47.485 "traddr": "10.0.0.3", 00:13:47.485 "adrfam": "ipv4", 00:13:47.485 "trsvcid": "4420", 00:13:47.485 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:47.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:47.485 "prchk_reftag": false, 00:13:47.485 "prchk_guard": false, 00:13:47.485 "hdgst": false, 00:13:47.485 "ddgst": false, 00:13:47.485 "dhchap_key": "key1", 00:13:47.485 "dhchap_ctrlr_key": "ckey2", 00:13:47.485 "allow_unrecognized_csi": false, 00:13:47.485 "method": "bdev_nvme_attach_controller", 00:13:47.485 "req_id": 1 00:13:47.485 } 00:13:47.485 Got JSON-RPC error response 00:13:47.485 response: 00:13:47.485 { 00:13:47.485 "code": -5, 00:13:47.485 "message": "Input/output error" 00:13:47.485 } 00:13:47.485 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:47.485 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:47.485 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:47.485 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:47.485 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:47.485 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.485 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.485 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.485 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 00:13:47.485 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.485 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.486 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.486 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.486 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:47.486 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.486 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:47.486 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.486 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:47.486 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.486 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.486 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.486 14:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.054 request: 00:13:48.054 { 00:13:48.054 "name": "nvme0", 00:13:48.054 "trtype": "tcp", 00:13:48.054 "traddr": "10.0.0.3", 00:13:48.054 "adrfam": "ipv4", 00:13:48.054 "trsvcid": "4420", 00:13:48.054 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:48.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:48.054 "prchk_reftag": false, 00:13:48.054 "prchk_guard": false, 00:13:48.054 "hdgst": false, 00:13:48.054 "ddgst": false, 00:13:48.054 "dhchap_key": "key1", 00:13:48.054 "dhchap_ctrlr_key": "ckey1", 00:13:48.054 "allow_unrecognized_csi": false, 00:13:48.054 "method": "bdev_nvme_attach_controller", 00:13:48.054 "req_id": 1 00:13:48.054 } 00:13:48.054 Got JSON-RPC error response 00:13:48.054 response: 00:13:48.054 { 00:13:48.054 "code": -5, 00:13:48.054 "message": "Input/output error" 00:13:48.054 } 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 80831 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 80831 ']' 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 80831 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80831 00:13:48.054 14:29:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.054 killing process with pid 80831 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80831' 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 80831 00:13:48.054 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 80831 00:13:48.313 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:48.313 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:48.313 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:48.313 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.313 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=83907 00:13:48.313 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:48.313 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 83907 00:13:48.313 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 83907 ']' 00:13:48.313 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.313 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.313 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
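Here the first target (pid 80831) has just been killed and nvmfappstart relaunches nvmf_tgt with --wait-for-rpc, which holds the application in an RPC-only state until initialization is requested explicitly, and -L nvmf_auth, which enables the auth debug log component; this lets the test reload the DH-HMAC-CHAP key material into the fresh process before any connections are accepted. A rough sketch of the relaunch as traced above (the command line is copied from the log; the backgrounding and pid capture are paraphrased):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!   # the run above recorded nvmfpid=83907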
00:13:48.313 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.313 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.572 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.572 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:48.572 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:48.572 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:48.572 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.572 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.572 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:48.572 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 83907 00:13:48.572 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 83907 ']' 00:13:48.572 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.572 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.572 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
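Before any keyring or subsystem RPCs are issued, waitforlisten blocks until the new process answers on /var/tmp/spdk.sock; the repeated "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." messages above come from that helper. A hypothetical readiness loop with the same effect (the real implementation lives in autotest_common.sh and is not shown in this excerpt):

    # poll the default RPC socket until the restarted target responds
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done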
00:13:48.572 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.572 14:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.140 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.140 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:49.140 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:49.140 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.140 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.140 null0 00:13:49.140 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.140 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:49.140 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tMP 00:13:49.140 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.140 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.N14 ]] 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.N14 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.z6K 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.rVC ]] 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rVC 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:49.141 14:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.UzL 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.rJH ]] 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rJH 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.O8J 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
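With the target restarted, the generated key files are registered with it by name through keyring_file_add_key (key0 through key3 plus their ckey counterparts, e.g. /tmp/spdk.key-null.tMP and /tmp/spdk.key-sha512.O8J above), and connect_authenticate sha512 ffdhe8192 3 then repeats the earlier flow against those named keys: the host entry is added with --dhchap-key key3 and the host attaches referring to a key of the same name from its own keyring (the host-side key registration is outside this excerpt). A minimal sketch of this keyring-backed variant, restricted to commands visible in this run (the default RPC socket is assumed for the target-side calls, as here):

    # target side: register the key file under the name the subsystem will reference
    scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.O8J
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3
    # host side: attach, naming the same key for the DH-HMAC-CHAP exchange
    # (assumes a key named key3 was also registered with the host application's keyring)
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3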
00:13:49.141 14:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:50.077 nvme0n1 00:13:50.077 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:50.077 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:50.077 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.645 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.646 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.646 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.646 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.646 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.646 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:50.646 { 00:13:50.646 "cntlid": 1, 00:13:50.646 "qid": 0, 00:13:50.646 "state": "enabled", 00:13:50.646 "thread": "nvmf_tgt_poll_group_000", 00:13:50.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:50.646 "listen_address": { 00:13:50.646 "trtype": "TCP", 00:13:50.646 "adrfam": "IPv4", 00:13:50.646 "traddr": "10.0.0.3", 00:13:50.646 "trsvcid": "4420" 00:13:50.646 }, 00:13:50.646 "peer_address": { 00:13:50.646 "trtype": "TCP", 00:13:50.646 "adrfam": "IPv4", 00:13:50.646 "traddr": "10.0.0.1", 00:13:50.646 "trsvcid": "38814" 00:13:50.646 }, 00:13:50.646 "auth": { 00:13:50.646 "state": "completed", 00:13:50.646 "digest": "sha512", 00:13:50.646 "dhgroup": "ffdhe8192" 00:13:50.646 } 00:13:50.646 } 00:13:50.646 ]' 00:13:50.646 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:50.646 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:50.646 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:50.646 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:50.646 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:50.646 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.646 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.646 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.905 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:13:50.905 14:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:13:51.473 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.473 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:51.473 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.473 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.473 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.473 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key3 00:13:51.473 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.473 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.473 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.473 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:51.473 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:51.737 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:51.737 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:51.737 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:51.737 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:51.737 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:51.737 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:51.737 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:51.737 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:51.737 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:51.737 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.035 request: 00:13:52.035 { 00:13:52.035 "name": "nvme0", 00:13:52.035 "trtype": "tcp", 00:13:52.035 "traddr": "10.0.0.3", 00:13:52.035 "adrfam": "ipv4", 00:13:52.035 "trsvcid": "4420", 00:13:52.035 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:52.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:52.035 "prchk_reftag": false, 00:13:52.035 "prchk_guard": false, 00:13:52.035 "hdgst": false, 00:13:52.035 "ddgst": false, 00:13:52.035 "dhchap_key": "key3", 00:13:52.035 "allow_unrecognized_csi": false, 00:13:52.035 "method": "bdev_nvme_attach_controller", 00:13:52.035 "req_id": 1 00:13:52.035 } 00:13:52.035 Got JSON-RPC error response 00:13:52.035 response: 00:13:52.035 { 00:13:52.035 "code": -5, 00:13:52.035 "message": "Input/output error" 00:13:52.035 } 00:13:52.035 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:52.035 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:52.035 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:52.035 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:52.035 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:52.035 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:52.035 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:52.035 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:52.294 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:52.294 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:52.294 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:52.294 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:52.294 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.294 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:52.294 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.294 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:52.294 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.294 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.553 request: 00:13:52.553 { 00:13:52.553 "name": "nvme0", 00:13:52.553 "trtype": "tcp", 00:13:52.553 "traddr": "10.0.0.3", 00:13:52.553 "adrfam": "ipv4", 00:13:52.553 "trsvcid": "4420", 00:13:52.553 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:52.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:52.553 "prchk_reftag": false, 00:13:52.553 "prchk_guard": false, 00:13:52.553 "hdgst": false, 00:13:52.553 "ddgst": false, 00:13:52.553 "dhchap_key": "key3", 00:13:52.553 "allow_unrecognized_csi": false, 00:13:52.553 "method": "bdev_nvme_attach_controller", 00:13:52.553 "req_id": 1 00:13:52.553 } 00:13:52.553 Got JSON-RPC error response 00:13:52.553 response: 00:13:52.553 { 00:13:52.553 "code": -5, 00:13:52.553 "message": "Input/output error" 00:13:52.553 } 00:13:52.553 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:52.553 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:52.553 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:52.553 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:52.553 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:52.553 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:52.553 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:52.553 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:52.553 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:52.553 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:52.812 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:52.812 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.812 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.812 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.812 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:52.812 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.812 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.812 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.812 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:52.812 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:52.812 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:52.812 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:52.812 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.812 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:52.812 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.812 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:52.812 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:52.812 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:53.387 request: 00:13:53.387 { 00:13:53.387 "name": "nvme0", 00:13:53.387 "trtype": "tcp", 00:13:53.387 "traddr": "10.0.0.3", 00:13:53.387 "adrfam": "ipv4", 00:13:53.387 "trsvcid": "4420", 00:13:53.387 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:53.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:53.387 "prchk_reftag": false, 00:13:53.387 "prchk_guard": false, 00:13:53.387 "hdgst": false, 00:13:53.387 "ddgst": false, 00:13:53.387 "dhchap_key": "key0", 00:13:53.387 "dhchap_ctrlr_key": "key1", 00:13:53.387 "allow_unrecognized_csi": false, 00:13:53.387 "method": "bdev_nvme_attach_controller", 00:13:53.387 "req_id": 1 00:13:53.387 } 00:13:53.387 Got JSON-RPC error response 00:13:53.387 response: 00:13:53.387 { 00:13:53.387 "code": -5, 00:13:53.387 "message": "Input/output error" 00:13:53.387 } 00:13:53.387 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:53.387 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:53.387 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:53.387 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:13:53.387 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:53.387 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:53.387 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:53.646 nvme0n1 00:13:53.646 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:53.646 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.646 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:53.905 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.905 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.905 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.164 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 00:13:54.164 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.164 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.164 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.164 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:54.164 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:54.164 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:55.100 nvme0n1 00:13:55.100 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:55.100 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:55.100 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.358 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.358 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:55.358 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.359 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.359 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.359 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:55.359 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:55.359 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.617 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.617 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:13:55.617 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid 63735ac0-cf43-4c13-880c-ea4676416181 -l 0 --dhchap-secret DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: --dhchap-ctrl-secret DHHC-1:03:ZDQ2YzRhOGIwNmE1MzgzNDMwZTRhNTVkNWM1MmEyZWQzZTFjMzNlYWI5NTczOTk4NDI5NzljYzM1MGVlNWY1YdaolEg=: 00:13:56.553 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:56.553 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:56.553 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:56.553 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:56.553 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:56.553 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:56.553 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:56.554 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.554 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.554 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:56.554 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:56.554 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:56.554 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:56.554 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:56.554 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:56.554 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:56.554 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:56.554 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:56.554 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:57.121 request: 00:13:57.121 { 00:13:57.121 "name": "nvme0", 00:13:57.121 "trtype": "tcp", 00:13:57.121 "traddr": "10.0.0.3", 00:13:57.121 "adrfam": "ipv4", 00:13:57.121 "trsvcid": "4420", 00:13:57.121 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:57.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181", 00:13:57.121 "prchk_reftag": false, 00:13:57.121 "prchk_guard": false, 00:13:57.121 "hdgst": false, 00:13:57.121 "ddgst": false, 00:13:57.121 "dhchap_key": "key1", 00:13:57.121 "allow_unrecognized_csi": false, 00:13:57.121 "method": "bdev_nvme_attach_controller", 00:13:57.121 "req_id": 1 00:13:57.121 } 00:13:57.121 Got JSON-RPC error response 00:13:57.121 response: 00:13:57.121 { 00:13:57.121 "code": -5, 00:13:57.121 "message": "Input/output error" 00:13:57.121 } 00:13:57.121 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:57.121 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:57.121 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:57.121 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:57.121 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:57.121 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:57.121 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:58.055 nvme0n1 00:13:58.056 
14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:58.056 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.056 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:58.623 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.623 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.623 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.623 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:13:58.623 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.623 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.623 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.623 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:58.623 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:58.623 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:59.190 nvme0n1 00:13:59.190 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:59.190 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:59.190 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.190 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.190 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.190 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.757 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:59.758 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.758 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.758 14:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.758 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: '' 2s 00:13:59.758 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:59.758 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:59.758 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: 00:13:59.758 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:59.758 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:59.758 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:59.758 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: ]] 00:13:59.758 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MjlmMmJmZmM5NTNiYTI4MTk4Zjg1MWRiMDcyNDdhMzmIs5Z3: 00:13:59.758 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:59.758 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:59.758 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key1 --dhchap-ctrlr-key key2 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: 2s 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:01.661 14:29:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: ]] 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2IyYzQ2MDAyNGE3ODkzMTRjYjJjN2VhZGM4NDU5OWRjOWIwMzliMjVkODE1NmMxeRBXow==: 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:01.661 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:03.584 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:14:03.584 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:03.584 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:03.584 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:03.584 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:03.584 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:03.584 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:03.584 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.847 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:03.847 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.847 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.847 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.847 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:03.847 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:03.847 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:04.782 nvme0n1 00:14:04.782 14:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:04.782 14:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.782 14:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.782 14:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.782 14:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:04.782 14:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:05.349 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:05.349 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.349 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:05.918 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.918 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:14:05.918 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.918 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.918 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.918 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:05.918 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:06.176 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:06.176 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.176 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:06.436 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.436 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:06.436 14:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.436 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.436 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.436 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:06.436 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:06.436 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:06.436 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:06.436 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.436 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:06.436 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.436 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:06.436 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:07.372 request: 00:14:07.372 { 00:14:07.372 "name": "nvme0", 00:14:07.372 "dhchap_key": "key1", 00:14:07.372 "dhchap_ctrlr_key": "key3", 00:14:07.372 "method": "bdev_nvme_set_keys", 00:14:07.372 "req_id": 1 00:14:07.372 } 00:14:07.372 Got JSON-RPC error response 00:14:07.372 response: 00:14:07.372 { 00:14:07.372 "code": -13, 00:14:07.372 "message": "Permission denied" 00:14:07.372 } 00:14:07.372 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:07.372 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:07.372 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:07.372 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:07.372 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:07.372 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:07.372 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.632 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:07.632 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:08.569 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:08.569 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:08.569 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.828 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:08.828 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:08.828 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.828 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.828 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.828 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:08.828 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:08.828 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:09.765 nvme0n1 00:14:09.765 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:09.765 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.765 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.765 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.765 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:09.765 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:09.765 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:09.765 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:09.765 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.765 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:09.765 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.765 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:09.765 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:10.332 request: 00:14:10.333 { 00:14:10.333 "name": "nvme0", 00:14:10.333 "dhchap_key": "key2", 00:14:10.333 "dhchap_ctrlr_key": "key0", 00:14:10.333 "method": "bdev_nvme_set_keys", 00:14:10.333 "req_id": 1 00:14:10.333 } 00:14:10.333 Got JSON-RPC error response 00:14:10.333 response: 00:14:10.333 { 00:14:10.333 "code": -13, 00:14:10.333 "message": "Permission denied" 00:14:10.333 } 00:14:10.592 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:10.592 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:10.592 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:10.592 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:10.592 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:10.592 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:10.592 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.850 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:10.850 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:11.788 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:11.788 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:11.788 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.046 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:12.046 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:12.046 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:12.047 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 80856 00:14:12.047 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 80856 ']' 00:14:12.047 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 80856 00:14:12.047 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:12.047 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.047 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80856 00:14:12.047 killing process with pid 80856 00:14:12.047 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:12.047 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:12.047 14:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80856' 00:14:12.047 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 80856 00:14:12.047 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 80856 00:14:12.305 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:12.305 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:12.305 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:12.305 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:12.305 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:12.305 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:12.305 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:12.305 rmmod nvme_tcp 00:14:12.564 rmmod nvme_fabrics 00:14:12.564 rmmod nvme_keyring 00:14:12.564 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 83907 ']' 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 83907 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 83907 ']' 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 83907 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83907 00:14:12.565 killing process with pid 83907 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83907' 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 83907 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 83907 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
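The DH-HMAC-CHAP portion of auth.sh that finishes above rotates keys in both directions: the target updates the keys allowed for the host with nvmf_subsystem_set_keys, the host follows with bdev_nvme_set_keys on the attached controller, and deliberately mismatched combinations are expected to fail (JSON-RPC error -5 "Input/output error" on a fresh attach, -13 "Permission denied" on an in-place rotation). A minimal sketch of that flow, assuming the same RPC sockets and NQNs as this run and that key0-key3 name keyring entries registered earlier in the test:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181

# Target side: permit key2 (host key) / key3 (controller key) for this host on the subsystem.
"$RPC" nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side: rotate the keys on the live controller without detaching it.
"$RPC" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

# A rotation the target was not told about (e.g. key2/key0) is rejected with
# JSON-RPC error -13 "Permission denied", as recorded in the log above.
"$RPC" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 || true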
00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:12.565 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:12.823 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:12.823 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:12.823 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:12.823 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:12.823 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:12.824 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:12.824 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:12.824 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:12.824 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:12.824 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:12.824 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:12.824 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.824 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.824 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.824 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:12.824 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.tMP /tmp/spdk.key-sha256.z6K /tmp/spdk.key-sha384.UzL /tmp/spdk.key-sha512.O8J /tmp/spdk.key-sha512.N14 /tmp/spdk.key-sha384.rVC /tmp/spdk.key-sha256.rJH '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:12.824 ************************************ 00:14:12.824 END TEST nvmf_auth_target 00:14:12.824 ************************************ 00:14:12.824 00:14:12.824 real 3m9.882s 00:14:12.824 user 7m36.929s 00:14:12.824 sys 0m27.930s 00:14:12.824 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.824 14:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.824 14:30:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:12.824 14:30:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:12.824 14:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:12.824 14:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.824 14:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:13.089 ************************************ 00:14:13.089 START TEST nvmf_bdevio_no_huge 00:14:13.089 ************************************ 00:14:13.089 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:13.089 * Looking for test storage... 00:14:13.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:13.089 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:13.089 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:13.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.090 --rc genhtml_branch_coverage=1 00:14:13.090 --rc genhtml_function_coverage=1 00:14:13.090 --rc genhtml_legend=1 00:14:13.090 --rc geninfo_all_blocks=1 00:14:13.090 --rc geninfo_unexecuted_blocks=1 00:14:13.090 00:14:13.090 ' 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:13.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.090 --rc genhtml_branch_coverage=1 00:14:13.090 --rc genhtml_function_coverage=1 00:14:13.090 --rc genhtml_legend=1 00:14:13.090 --rc geninfo_all_blocks=1 00:14:13.090 --rc geninfo_unexecuted_blocks=1 00:14:13.090 00:14:13.090 ' 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:13.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.090 --rc genhtml_branch_coverage=1 00:14:13.090 --rc genhtml_function_coverage=1 00:14:13.090 --rc genhtml_legend=1 00:14:13.090 --rc geninfo_all_blocks=1 00:14:13.090 --rc geninfo_unexecuted_blocks=1 00:14:13.090 00:14:13.090 ' 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:13.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.090 --rc genhtml_branch_coverage=1 00:14:13.090 --rc genhtml_function_coverage=1 00:14:13.090 --rc genhtml_legend=1 00:14:13.090 --rc geninfo_all_blocks=1 00:14:13.090 --rc geninfo_unexecuted_blocks=1 00:14:13.090 00:14:13.090 ' 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:13.090 
14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:14:13.090 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:13.091 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:13.091 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:13.092 
14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:13.092 Cannot find device "nvmf_init_br" 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:13.092 Cannot find device "nvmf_init_br2" 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:13.092 Cannot find device "nvmf_tgt_br" 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:13.092 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:13.354 Cannot find device "nvmf_tgt_br2" 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:13.354 Cannot find device "nvmf_init_br" 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:13.354 Cannot find device "nvmf_init_br2" 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:13.354 Cannot find device "nvmf_tgt_br" 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:13.354 Cannot find device "nvmf_tgt_br2" 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:13.354 Cannot find device "nvmf_br" 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:13.354 Cannot find device "nvmf_init_if" 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:13.354 Cannot find device "nvmf_init_if2" 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:13.354 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:13.354 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:13.354 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:13.355 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:13.355 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:13.355 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:13.355 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:13.355 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:13.355 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:13.355 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:13.355 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:13.355 14:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:13.355 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:13.613 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:13.614 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:13.614 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:14:13.614 00:14:13.614 --- 10.0.0.3 ping statistics --- 00:14:13.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.614 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:13.614 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:13.614 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:14:13.614 00:14:13.614 --- 10.0.0.4 ping statistics --- 00:14:13.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.614 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:13.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:13.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:13.614 00:14:13.614 --- 10.0.0.1 ping statistics --- 00:14:13.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.614 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:13.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:13.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:14:13.614 00:14:13.614 --- 10.0.0.2 ping statistics --- 00:14:13.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.614 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=84540 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 84540 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 84540 ']' 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.614 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.614 [2024-12-16 14:30:05.716872] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
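
For reference, the nvmf_veth_init sequence traced above builds a small test topology: two initiator-side veth interfaces (10.0.0.1/24 and 10.0.0.2/24), two target-side veth interfaces moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/24 and 10.0.0.4/24), and a bridge joining the peer ends, with iptables rules opening TCP port 4420. A minimal standalone sketch of the same setup, using the interface names and addresses from the trace (run as root; the iptables comment text here is only an illustrative tag that the later cleanup filters on), is:

# Sketch of the veth/namespace/bridge layout from the trace above (assumptions noted inline).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
# Open the NVMe/TCP port; the SPDK_NVMF comment tag lets the teardown remove exactly these rules.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:init_if'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:init_if2'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:forward'
ping -c 1 10.0.0.3   # initiator reaches the target namespace through the bridge, as in the trace
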
00:14:13.614 [2024-12-16 14:30:05.716988] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:13.873 [2024-12-16 14:30:05.876086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:13.873 [2024-12-16 14:30:05.932418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.873 [2024-12-16 14:30:05.932497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.873 [2024-12-16 14:30:05.932512] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.873 [2024-12-16 14:30:05.932522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.873 [2024-12-16 14:30:05.932531] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.873 [2024-12-16 14:30:05.933115] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:14:13.873 [2024-12-16 14:30:05.933848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:14:13.873 [2024-12-16 14:30:05.933988] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:14:13.873 [2024-12-16 14:30:05.933997] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.873 [2024-12-16 14:30:05.940127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:14.133 [2024-12-16 14:30:06.140212] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:14.133 Malloc0 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.133 14:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:14.133 [2024-12-16 14:30:06.184888] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:14.133 { 00:14:14.133 "params": { 00:14:14.133 "name": "Nvme$subsystem", 00:14:14.133 "trtype": "$TEST_TRANSPORT", 00:14:14.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:14.133 "adrfam": "ipv4", 00:14:14.133 "trsvcid": "$NVMF_PORT", 00:14:14.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:14.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:14.133 "hdgst": ${hdgst:-false}, 00:14:14.133 "ddgst": ${ddgst:-false} 00:14:14.133 }, 00:14:14.133 "method": "bdev_nvme_attach_controller" 00:14:14.133 } 00:14:14.133 EOF 00:14:14.133 )") 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
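
The rpc_cmd sequence above provisions the target end to end (a TCP transport, a 64 MiB malloc bdev, subsystem cnode1 with that bdev as a namespace, and a listener on 10.0.0.3:4420), and gen_nvmf_target_json then emits a bdev_nvme_attach_controller config that bdevio reads through a file descriptor (the --json /dev/fd/62 in the trace). A hedged manual equivalent, assuming the target's RPC socket sits at the default /var/tmp/spdk.sock and that the generated JSON uses SPDK's standard bdev-subsystem wrapper (the wrapper itself is not visible in this excerpt), could look like:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target-side provisioning, mirroring the rpc_cmd calls in the trace
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Initiator side: hand bdevio the attach-controller config over a pipe, which is what
# "--json /dev/fd/62" in the trace amounts to (wrapper layout assumed, values from the trace).
config='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false } } ] } ] }'
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --no-huge -s 1024 --json <(printf '%s\n' "$config")
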
00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:14:14.133 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:14.133 "params": { 00:14:14.133 "name": "Nvme1", 00:14:14.133 "trtype": "tcp", 00:14:14.133 "traddr": "10.0.0.3", 00:14:14.133 "adrfam": "ipv4", 00:14:14.133 "trsvcid": "4420", 00:14:14.133 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:14.133 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:14.133 "hdgst": false, 00:14:14.133 "ddgst": false 00:14:14.133 }, 00:14:14.133 "method": "bdev_nvme_attach_controller" 00:14:14.133 }' 00:14:14.133 [2024-12-16 14:30:06.244943] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:14.133 [2024-12-16 14:30:06.245040] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid84568 ] 00:14:14.392 [2024-12-16 14:30:06.403567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:14.392 [2024-12-16 14:30:06.467672] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.392 [2024-12-16 14:30:06.467742] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.392 [2024-12-16 14:30:06.467746] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.392 [2024-12-16 14:30:06.497282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:14.651 I/O targets: 00:14:14.651 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:14.651 00:14:14.651 00:14:14.651 CUnit - A unit testing framework for C - Version 2.1-3 00:14:14.651 http://cunit.sourceforge.net/ 00:14:14.651 00:14:14.651 00:14:14.651 Suite: bdevio tests on: Nvme1n1 00:14:14.651 Test: blockdev write read block ...passed 00:14:14.651 Test: blockdev write zeroes read block ...passed 00:14:14.651 Test: blockdev write zeroes read no split ...passed 00:14:14.651 Test: blockdev write zeroes read split ...passed 00:14:14.651 Test: blockdev write zeroes read split partial ...passed 00:14:14.651 Test: blockdev reset ...[2024-12-16 14:30:06.777754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:14.651 [2024-12-16 14:30:06.777872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x835060 (9): Bad file descriptor 00:14:14.651 passed 00:14:14.651 Test: blockdev write read 8 blocks ...[2024-12-16 14:30:06.793466] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:14.651 passed 00:14:14.651 Test: blockdev write read size > 128k ...passed 00:14:14.651 Test: blockdev write read invalid size ...passed 00:14:14.651 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:14.651 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:14.651 Test: blockdev write read max offset ...passed 00:14:14.651 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:14.651 Test: blockdev writev readv 8 blocks ...passed 00:14:14.651 Test: blockdev writev readv 30 x 1block ...passed 00:14:14.651 Test: blockdev writev readv block ...passed 00:14:14.651 Test: blockdev writev readv size > 128k ...passed 00:14:14.651 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:14.651 Test: blockdev comparev and writev ...[2024-12-16 14:30:06.802192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:14.651 [2024-12-16 14:30:06.802280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:14.651 [2024-12-16 14:30:06.802303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:14.651 [2024-12-16 14:30:06.802317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:14.651 [2024-12-16 14:30:06.802663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:14.651 [2024-12-16 14:30:06.802689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:14.651 [2024-12-16 14:30:06.802712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:14.651 [2024-12-16 14:30:06.802728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:14.651 [2024-12-16 14:30:06.803041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:14.651 [2024-12-16 14:30:06.803059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:14.651 [2024-12-16 14:30:06.803075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:14.651 [2024-12-16 14:30:06.803092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:14.651 [2024-12-16 14:30:06.803377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:14.651 [2024-12-16 14:30:06.803400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:14.651 [2024-12-16 14:30:06.803418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:14.651 [2024-12-16 14:30:06.803440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:14:14.651 passed 00:14:14.651 Test: blockdev nvme passthru rw ...passed 00:14:14.651 Test: blockdev nvme passthru vendor specific ...passed 00:14:14.651 Test: blockdev nvme admin passthru ...[2024-12-16 14:30:06.804488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:14.651 [2024-12-16 14:30:06.804524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:14.651 [2024-12-16 14:30:06.804644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:14.651 [2024-12-16 14:30:06.804661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:14.651 [2024-12-16 14:30:06.804765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:14.651 [2024-12-16 14:30:06.804782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:14.651 [2024-12-16 14:30:06.804924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:14.651 [2024-12-16 14:30:06.804941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:14.651 passed 00:14:14.651 Test: blockdev copy ...passed 00:14:14.651 00:14:14.651 Run Summary: Type Total Ran Passed Failed Inactive 00:14:14.651 suites 1 1 n/a 0 0 00:14:14.651 tests 23 23 23 0 0 00:14:14.651 asserts 152 152 152 0 n/a 00:14:14.651 00:14:14.651 Elapsed time = 0.166 seconds 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:15.225 rmmod nvme_tcp 00:14:15.225 rmmod nvme_fabrics 00:14:15.225 rmmod nvme_keyring 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:14:15.225 14:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 84540 ']' 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 84540 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 84540 ']' 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 84540 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84540 00:14:15.225 killing process with pid 84540 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84540' 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 84540 00:14:15.225 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 84540 00:14:15.824 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:15.824 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:15.824 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:15.824 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:15.824 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:14:15.824 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:15.824 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:14:15.824 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:15.824 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:15.824 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:15.824 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:15.824 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:15.824 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:15.824 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:15.824 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:15.824 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set 
nvmf_tgt_br down 00:14:15.825 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:15.825 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:15.825 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:15.825 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:15.825 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:15.825 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:15.825 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:15.825 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.825 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.825 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:16.084 ************************************ 00:14:16.084 END TEST nvmf_bdevio_no_huge 00:14:16.084 ************************************ 00:14:16.084 00:14:16.084 real 0m2.996s 00:14:16.084 user 0m8.545s 00:14:16.084 sys 0m1.484s 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:16.084 ************************************ 00:14:16.084 START TEST nvmf_tls 00:14:16.084 ************************************ 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:16.084 * Looking for test storage... 
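
Teardown in the trace works the other way around: the iptr helper saves the whole iptables ruleset, strips every line carrying the SPDK_NVMF comment tag, and restores the remainder, so only the rules the test added are removed; the veth interfaces, the bridge, and the target namespace are then deleted. A standalone sketch of that cleanup (remove_spdk_ns is assumed here to reduce to an ip netns delete) is:

# Drop only the iptables rules tagged with the SPDK_NVMF comment, as iptr does in the trace
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Remove the bridge and the initiator-side veths; deleting one end of a veth removes its peer too
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if        2>/dev/null || true
ip link delete nvmf_init_if2       2>/dev/null || true

# Deleting the namespace destroys the veth endpoints that were moved into it
ip netns delete nvmf_tgt_ns_spdk   2>/dev/null || true
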
00:14:16.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:16.084 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:16.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.084 --rc genhtml_branch_coverage=1 00:14:16.084 --rc genhtml_function_coverage=1 00:14:16.084 --rc genhtml_legend=1 00:14:16.084 --rc geninfo_all_blocks=1 00:14:16.084 --rc geninfo_unexecuted_blocks=1 00:14:16.084 00:14:16.085 ' 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:16.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.085 --rc genhtml_branch_coverage=1 00:14:16.085 --rc genhtml_function_coverage=1 00:14:16.085 --rc genhtml_legend=1 00:14:16.085 --rc geninfo_all_blocks=1 00:14:16.085 --rc geninfo_unexecuted_blocks=1 00:14:16.085 00:14:16.085 ' 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:16.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.085 --rc genhtml_branch_coverage=1 00:14:16.085 --rc genhtml_function_coverage=1 00:14:16.085 --rc genhtml_legend=1 00:14:16.085 --rc geninfo_all_blocks=1 00:14:16.085 --rc geninfo_unexecuted_blocks=1 00:14:16.085 00:14:16.085 ' 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:16.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.085 --rc genhtml_branch_coverage=1 00:14:16.085 --rc genhtml_function_coverage=1 00:14:16.085 --rc genhtml_legend=1 00:14:16.085 --rc geninfo_all_blocks=1 00:14:16.085 --rc geninfo_unexecuted_blocks=1 00:14:16.085 00:14:16.085 ' 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.085 14:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.085 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:16.344 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.344 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:16.344 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:16.344 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:16.344 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.344 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.344 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.344 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:16.344 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:16.344 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:16.344 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:16.344 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:16.344 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:16.344 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:16.344 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:16.344 
14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.344 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:16.344 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:16.345 Cannot find device "nvmf_init_br" 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:16.345 Cannot find device "nvmf_init_br2" 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:16.345 Cannot find device "nvmf_tgt_br" 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:16.345 Cannot find device "nvmf_tgt_br2" 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:16.345 Cannot find device "nvmf_init_br" 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:16.345 Cannot find device "nvmf_init_br2" 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:16.345 Cannot find device "nvmf_tgt_br" 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:16.345 Cannot find device "nvmf_tgt_br2" 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:16.345 Cannot find device "nvmf_br" 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:16.345 Cannot find device "nvmf_init_if" 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:16.345 Cannot find device "nvmf_init_if2" 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:16.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:16.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:16.345 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:16.605 14:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:16.605 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:16.605 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:14:16.605 00:14:16.605 --- 10.0.0.3 ping statistics --- 00:14:16.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.605 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:16.605 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:16.605 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:14:16.605 00:14:16.605 --- 10.0.0.4 ping statistics --- 00:14:16.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.605 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:16.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:16.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:16.605 00:14:16.605 --- 10.0.0.1 ping statistics --- 00:14:16.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.605 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:16.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:16.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:14:16.605 00:14:16.605 --- 10.0.0.2 ping statistics --- 00:14:16.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.605 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84806 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84806 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84806 ']' 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:16.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:16.605 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.605 [2024-12-16 14:30:08.758947] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
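
For the TLS suite the target is launched with --wait-for-rpc, which keeps initialization paused so the socket layer can be switched to the ssl implementation and its TLS version pinned before any listener comes up. A minimal sketch of that flow, using the rpc.py calls that appear in the trace (framework_start_init is added here as the usual way to resume startup and is not shown in this excerpt), would be:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Start the target paused inside the test namespace (core mask 0x2 as in the trace)
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 --wait-for-rpc &
sleep 1   # the real script polls the RPC socket with waitforlisten instead of sleeping

$RPC sock_set_default_impl -i ssl
$RPC sock_impl_set_options -i ssl --tls-version 13
$RPC sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13
$RPC framework_start_init   # resume initialization once the socket options are in place
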
00:14:16.605 [2024-12-16 14:30:08.759650] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.864 [2024-12-16 14:30:08.909049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.864 [2024-12-16 14:30:08.928497] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.864 [2024-12-16 14:30:08.928578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.864 [2024-12-16 14:30:08.928621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.864 [2024-12-16 14:30:08.928629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.864 [2024-12-16 14:30:08.928636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.864 [2024-12-16 14:30:08.929010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.864 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.864 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:16.864 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:16.864 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:16.864 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.124 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.124 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:17.124 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:17.382 true 00:14:17.382 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:17.382 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:17.641 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:17.641 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:17.641 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:17.900 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:17.900 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:18.159 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:18.159 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:18.159 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:18.418 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:18.418 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:18.677 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:18.677 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:18.677 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:18.935 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:19.194 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:19.194 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:19.195 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:19.452 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:19.452 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:19.710 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:19.710 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:19.710 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:19.710 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:19.710 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:19.968 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:19.969 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:19.969 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:19.969 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:19.969 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:19.969 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:19.969 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:14:19.969 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:19.969 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.D9ylwxeOk7 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.txNhJ3MWbg 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.D9ylwxeOk7 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.txNhJ3MWbg 00:14:20.227 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:20.486 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:20.744 [2024-12-16 14:30:12.784694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:20.744 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.D9ylwxeOk7 00:14:20.744 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.D9ylwxeOk7 00:14:20.744 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:21.003 [2024-12-16 14:30:13.051120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.003 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:21.261 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:21.519 [2024-12-16 14:30:13.579272] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:21.519 [2024-12-16 14:30:13.579775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:21.519 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:21.777 malloc0 00:14:21.777 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:22.035 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.D9ylwxeOk7 00:14:22.294 14:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:22.552 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.D9ylwxeOk7 00:14:34.751 Initializing NVMe Controllers 00:14:34.751 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:34.751 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:34.751 Initialization complete. Launching workers. 00:14:34.751 ======================================================== 00:14:34.751 Latency(us) 00:14:34.751 Device Information : IOPS MiB/s Average min max 00:14:34.751 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9059.59 35.39 7065.71 1384.81 12321.85 00:14:34.751 ======================================================== 00:14:34.751 Total : 9059.59 35.39 7065.71 1384.81 12321.85 00:14:34.751 00:14:34.751 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.D9ylwxeOk7 00:14:34.751 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:34.751 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:34.751 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:34.751 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.D9ylwxeOk7 00:14:34.751 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:34.751 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85037 00:14:34.751 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:34.751 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:34.751 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85037 /var/tmp/bdevperf.sock 00:14:34.751 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85037 ']' 00:14:34.751 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:34.751 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.751 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:34.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
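Once bdevperf is listening on /var/tmp/bdevperf.sock, the initiator side of the TLS run needs only two RPCs: register the PSK file as a keyring entry and attach a controller that references it. A condensed sketch of the sequence that follows in the log (socket path, key name, addresses and NQNs all as shown there):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.D9ylwxeOk7
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0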
00:14:34.751 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.751 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:34.751 [2024-12-16 14:30:24.921319] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:34.751 [2024-12-16 14:30:24.921659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85037 ] 00:14:34.751 [2024-12-16 14:30:25.074476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.751 [2024-12-16 14:30:25.098601] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.751 [2024-12-16 14:30:25.132278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:34.751 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.751 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:34.751 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.D9ylwxeOk7 00:14:34.751 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:34.751 [2024-12-16 14:30:25.751510] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:34.751 TLSTESTn1 00:14:34.751 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:34.751 Running I/O for 10 seconds... 
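The key file used by both runs (/tmp/tmp.D9ylwxeOk7) holds a string in the NVMe TLS PSK interchange format assembled by format_interchange_psk earlier in this section: the NVMeTLSkey-1 prefix, a hash identifier ('01' here; the 48-character key created near the end of the section uses '02'), and the base64-encoded configured PSK with a CRC-32 appended. A small sketch that rebuilds the first key; treating the hex string as literal ASCII bytes and appending the CRC-32 in little-endian order are assumptions inferred from the value shown in the log:

    key='00112233445566778899aabbccddeeff'
    # Assumption: PSK bytes are the literal ASCII string, CRC-32 is appended little-endian.
    python3 -c 'import base64,sys,zlib; psk=sys.argv[1].encode(); crc=zlib.crc32(psk).to_bytes(4,"little"); print("NVMeTLSkey-1:01:"+base64.b64encode(psk+crc).decode()+":")' "$key"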
00:14:36.126 4224.00 IOPS, 16.50 MiB/s [2024-12-16T14:30:29.261Z] 4319.00 IOPS, 16.87 MiB/s [2024-12-16T14:30:30.201Z] 4338.00 IOPS, 16.95 MiB/s [2024-12-16T14:30:31.138Z] 4351.75 IOPS, 17.00 MiB/s [2024-12-16T14:30:32.073Z] 4353.40 IOPS, 17.01 MiB/s [2024-12-16T14:30:33.009Z] 4362.33 IOPS, 17.04 MiB/s [2024-12-16T14:30:33.946Z] 4363.29 IOPS, 17.04 MiB/s [2024-12-16T14:30:35.328Z] 4310.25 IOPS, 16.84 MiB/s [2024-12-16T14:30:36.263Z] 4271.33 IOPS, 16.68 MiB/s [2024-12-16T14:30:36.263Z] 4242.60 IOPS, 16.57 MiB/s 00:14:44.063 Latency(us) 00:14:44.063 [2024-12-16T14:30:36.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.063 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:44.063 Verification LBA range: start 0x0 length 0x2000 00:14:44.063 TLSTESTn1 : 10.02 4248.28 16.59 0.00 0.00 30075.75 5600.35 33363.78 00:14:44.063 [2024-12-16T14:30:36.263Z] =================================================================================================================== 00:14:44.063 [2024-12-16T14:30:36.263Z] Total : 4248.28 16.59 0.00 0.00 30075.75 5600.35 33363.78 00:14:44.063 { 00:14:44.063 "results": [ 00:14:44.063 { 00:14:44.063 "job": "TLSTESTn1", 00:14:44.063 "core_mask": "0x4", 00:14:44.063 "workload": "verify", 00:14:44.063 "status": "finished", 00:14:44.064 "verify_range": { 00:14:44.064 "start": 0, 00:14:44.064 "length": 8192 00:14:44.064 }, 00:14:44.064 "queue_depth": 128, 00:14:44.064 "io_size": 4096, 00:14:44.064 "runtime": 10.016283, 00:14:44.064 "iops": 4248.282521570128, 00:14:44.064 "mibps": 16.59485359988331, 00:14:44.064 "io_failed": 0, 00:14:44.064 "io_timeout": 0, 00:14:44.064 "avg_latency_us": 30075.746503273, 00:14:44.064 "min_latency_us": 5600.349090909091, 00:14:44.064 "max_latency_us": 33363.781818181815 00:14:44.064 } 00:14:44.064 ], 00:14:44.064 "core_count": 1 00:14:44.064 } 00:14:44.064 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:44.064 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 85037 00:14:44.064 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85037 ']' 00:14:44.064 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85037 00:14:44.064 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:44.064 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.064 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85037 00:14:44.064 killing process with pid 85037 00:14:44.064 Received shutdown signal, test time was about 10.000000 seconds 00:14:44.064 00:14:44.064 Latency(us) 00:14:44.064 [2024-12-16T14:30:36.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.064 [2024-12-16T14:30:36.264Z] =================================================================================================================== 00:14:44.064 [2024-12-16T14:30:36.264Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 85037' 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85037 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85037 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.txNhJ3MWbg 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.txNhJ3MWbg 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:44.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.txNhJ3MWbg 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.txNhJ3MWbg 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85164 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85164 /var/tmp/bdevperf.sock 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85164 ']' 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.064 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.064 [2024-12-16 14:30:36.195246] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:14:44.064 [2024-12-16 14:30:36.195589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85164 ] 00:14:44.322 [2024-12-16 14:30:36.337332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.322 [2024-12-16 14:30:36.358694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.322 [2024-12-16 14:30:36.388756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.322 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.322 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:44.322 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.txNhJ3MWbg 00:14:44.580 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:44.839 [2024-12-16 14:30:36.986186] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:44.839 [2024-12-16 14:30:36.991446] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:44.839 [2024-12-16 14:30:36.992041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1ead0 (107): Transport endpoint is not connected 00:14:44.839 [2024-12-16 14:30:36.993029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1ead0 (9): Bad file descriptor 00:14:44.839 [2024-12-16 14:30:36.994024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:44.839 [2024-12-16 14:30:36.994172] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:44.839 [2024-12-16 14:30:36.994191] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:44.839 [2024-12-16 14:30:36.994209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
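This failure is the point of the test case: the initiator presented the second key (/tmp/tmp.txNhJ3MWbg), which was never registered against the subsystem, so the TLS handshake is torn down (the errno 107 above) and bdev_nvme_attach_controller reports the Input/output error dumped below. The NOT wrapper around run_bdevperf simply expects this non-zero outcome; a simplified stand-in for the idiom (the real helper in autotest_common.sh does additional bookkeeping, as the surrounding xtrace shows):

    NOT() {
        # Succeed only when the wrapped command fails.
        ! "$@"
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.txNhJ3MWbg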
00:14:44.839 request: 00:14:44.839 { 00:14:44.839 "name": "TLSTEST", 00:14:44.839 "trtype": "tcp", 00:14:44.839 "traddr": "10.0.0.3", 00:14:44.839 "adrfam": "ipv4", 00:14:44.839 "trsvcid": "4420", 00:14:44.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:44.839 "prchk_reftag": false, 00:14:44.839 "prchk_guard": false, 00:14:44.839 "hdgst": false, 00:14:44.839 "ddgst": false, 00:14:44.839 "psk": "key0", 00:14:44.839 "allow_unrecognized_csi": false, 00:14:44.839 "method": "bdev_nvme_attach_controller", 00:14:44.839 "req_id": 1 00:14:44.839 } 00:14:44.839 Got JSON-RPC error response 00:14:44.839 response: 00:14:44.839 { 00:14:44.839 "code": -5, 00:14:44.839 "message": "Input/output error" 00:14:44.839 } 00:14:44.839 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85164 00:14:44.839 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85164 ']' 00:14:44.839 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85164 00:14:44.839 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:44.839 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.839 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85164 00:14:45.097 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:45.097 killing process with pid 85164 00:14:45.097 Received shutdown signal, test time was about 10.000000 seconds 00:14:45.097 00:14:45.097 Latency(us) 00:14:45.097 [2024-12-16T14:30:37.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.098 [2024-12-16T14:30:37.298Z] =================================================================================================================== 00:14:45.098 [2024-12-16T14:30:37.298Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85164' 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85164 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85164 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.D9ylwxeOk7 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.D9ylwxeOk7 
00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.D9ylwxeOk7 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.D9ylwxeOk7 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85185 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85185 /var/tmp/bdevperf.sock 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85185 ']' 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:45.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.098 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.098 [2024-12-16 14:30:37.225193] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:14:45.098 [2024-12-16 14:30:37.225469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85185 ] 00:14:45.356 [2024-12-16 14:30:37.376137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.357 [2024-12-16 14:30:37.397695] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.357 [2024-12-16 14:30:37.427950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.357 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.357 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:45.357 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.D9ylwxeOk7 00:14:45.616 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:45.875 [2024-12-16 14:30:38.057390] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:45.875 [2024-12-16 14:30:38.062557] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:45.875 [2024-12-16 14:30:38.062787] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:45.875 [2024-12-16 14:30:38.062981] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:45.875 [2024-12-16 14:30:38.063282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf54ad0 (107): Transport endpoint is not connected 00:14:45.875 [2024-12-16 14:30:38.064267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf54ad0 (9): Bad file descriptor 00:14:45.875 [2024-12-16 14:30:38.065263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:45.875 [2024-12-16 14:30:38.065445] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:45.875 [2024-12-16 14:30:38.065668] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:45.875 [2024-12-16 14:30:38.065697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
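Here the key material is fine but the host NQN is not: only nqn.2016-06.io.spdk:host1 was authorized on cnode1, so the target-side lookup for the TLS identity 'NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1' fails and the connection is dropped, again surfacing as the Input/output error below. Making this pairing legitimate would take one extra provisioning call on the target, mirroring the add_host call used for host1 earlier (a hypothetical fix, not something the test performs):

    # Hypothetical: authorize host2 on the subsystem with the already-registered key0.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0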
00:14:45.875 request: 00:14:45.875 { 00:14:45.875 "name": "TLSTEST", 00:14:45.875 "trtype": "tcp", 00:14:45.875 "traddr": "10.0.0.3", 00:14:45.875 "adrfam": "ipv4", 00:14:45.875 "trsvcid": "4420", 00:14:45.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.875 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:45.875 "prchk_reftag": false, 00:14:45.875 "prchk_guard": false, 00:14:45.875 "hdgst": false, 00:14:45.875 "ddgst": false, 00:14:45.875 "psk": "key0", 00:14:45.875 "allow_unrecognized_csi": false, 00:14:45.875 "method": "bdev_nvme_attach_controller", 00:14:45.875 "req_id": 1 00:14:45.875 } 00:14:45.875 Got JSON-RPC error response 00:14:45.875 response: 00:14:45.875 { 00:14:45.875 "code": -5, 00:14:45.875 "message": "Input/output error" 00:14:45.875 } 00:14:46.133 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85185 00:14:46.133 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85185 ']' 00:14:46.133 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85185 00:14:46.133 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:46.133 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.133 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85185 00:14:46.133 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85185' 00:14:46.134 killing process with pid 85185 00:14:46.134 Received shutdown signal, test time was about 10.000000 seconds 00:14:46.134 00:14:46.134 Latency(us) 00:14:46.134 [2024-12-16T14:30:38.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.134 [2024-12-16T14:30:38.334Z] =================================================================================================================== 00:14:46.134 [2024-12-16T14:30:38.334Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85185 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85185 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.D9ylwxeOk7 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.D9ylwxeOk7 
00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.D9ylwxeOk7 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.D9ylwxeOk7 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85209 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85209 /var/tmp/bdevperf.sock 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85209 ']' 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.134 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.134 [2024-12-16 14:30:38.293456] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:14:46.134 [2024-12-16 14:30:38.293555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85209 ] 00:14:46.392 [2024-12-16 14:30:38.438181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.392 [2024-12-16 14:30:38.460045] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.392 [2024-12-16 14:30:38.490588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:46.392 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.392 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:46.392 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.D9ylwxeOk7 00:14:46.958 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:46.958 [2024-12-16 14:30:39.116174] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:46.958 [2024-12-16 14:30:39.121321] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:46.958 [2024-12-16 14:30:39.121519] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:46.958 [2024-12-16 14:30:39.121579] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:46.958 [2024-12-16 14:30:39.122058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x870ad0 (107): Transport endpoint is not connected 00:14:46.958 [2024-12-16 14:30:39.123040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x870ad0 (9): Bad file descriptor 00:14:46.958 [2024-12-16 14:30:39.124036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:14:46.958 [2024-12-16 14:30:39.124065] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:46.958 [2024-12-16 14:30:39.124077] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:46.958 [2024-12-16 14:30:39.124093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
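The third negative case flips the subsystem instead: nqn.2016-06.io.spdk:cnode2 was never created on the target, so no PSK can match the identity 'NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2' and the handshake is rejected just as before. When triaging this kind of failure it can help to dump what the target actually has registered; a quick sketch using standard SPDK RPCs (their output is not shown in this log):

    # List the keys known to the target keyring and the subsystems it exposes.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_get_keys
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems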
00:14:46.958 request: 00:14:46.958 { 00:14:46.958 "name": "TLSTEST", 00:14:46.958 "trtype": "tcp", 00:14:46.958 "traddr": "10.0.0.3", 00:14:46.958 "adrfam": "ipv4", 00:14:46.958 "trsvcid": "4420", 00:14:46.958 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:46.958 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:46.958 "prchk_reftag": false, 00:14:46.958 "prchk_guard": false, 00:14:46.958 "hdgst": false, 00:14:46.958 "ddgst": false, 00:14:46.958 "psk": "key0", 00:14:46.958 "allow_unrecognized_csi": false, 00:14:46.958 "method": "bdev_nvme_attach_controller", 00:14:46.958 "req_id": 1 00:14:46.958 } 00:14:46.958 Got JSON-RPC error response 00:14:46.958 response: 00:14:46.958 { 00:14:46.958 "code": -5, 00:14:46.958 "message": "Input/output error" 00:14:46.959 } 00:14:46.959 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85209 00:14:46.959 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85209 ']' 00:14:46.959 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85209 00:14:46.959 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:46.959 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.959 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85209 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85209' 00:14:47.219 killing process with pid 85209 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85209 00:14:47.219 Received shutdown signal, test time was about 10.000000 seconds 00:14:47.219 00:14:47.219 Latency(us) 00:14:47.219 [2024-12-16T14:30:39.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.219 [2024-12-16T14:30:39.419Z] =================================================================================================================== 00:14:47.219 [2024-12-16T14:30:39.419Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85209 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:47.219 14:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:47.219 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85230 00:14:47.220 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:47.220 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:47.220 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85230 /var/tmp/bdevperf.sock 00:14:47.220 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85230 ']' 00:14:47.220 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:47.220 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.220 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:47.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:47.220 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.220 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.220 [2024-12-16 14:30:39.360829] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:14:47.220 [2024-12-16 14:30:39.361102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85230 ] 00:14:47.479 [2024-12-16 14:30:39.512703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.479 [2024-12-16 14:30:39.541885] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.479 [2024-12-16 14:30:39.575503] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:47.479 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.479 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:47.479 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:47.737 [2024-12-16 14:30:39.894865] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:47.738 [2024-12-16 14:30:39.894910] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:47.738 request: 00:14:47.738 { 00:14:47.738 "name": "key0", 00:14:47.738 "path": "", 00:14:47.738 "method": "keyring_file_add_key", 00:14:47.738 "req_id": 1 00:14:47.738 } 00:14:47.738 Got JSON-RPC error response 00:14:47.738 response: 00:14:47.738 { 00:14:47.738 "code": -1, 00:14:47.738 "message": "Operation not permitted" 00:14:47.738 } 00:14:47.738 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:47.996 [2024-12-16 14:30:40.159039] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:47.996 [2024-12-16 14:30:40.159359] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:47.996 request: 00:14:47.996 { 00:14:47.996 "name": "TLSTEST", 00:14:47.996 "trtype": "tcp", 00:14:47.996 "traddr": "10.0.0.3", 00:14:47.996 "adrfam": "ipv4", 00:14:47.996 "trsvcid": "4420", 00:14:47.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.996 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:47.996 "prchk_reftag": false, 00:14:47.996 "prchk_guard": false, 00:14:47.996 "hdgst": false, 00:14:47.996 "ddgst": false, 00:14:47.996 "psk": "key0", 00:14:47.996 "allow_unrecognized_csi": false, 00:14:47.996 "method": "bdev_nvme_attach_controller", 00:14:47.996 "req_id": 1 00:14:47.996 } 00:14:47.996 Got JSON-RPC error response 00:14:47.996 response: 00:14:47.996 { 00:14:47.996 "code": -126, 00:14:47.996 "message": "Required key not available" 00:14:47.996 } 00:14:47.996 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85230 00:14:47.996 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85230 ']' 00:14:47.996 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85230 00:14:47.996 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:47.996 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.996 14:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85230 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85230' 00:14:48.255 killing process with pid 85230 00:14:48.255 Received shutdown signal, test time was about 10.000000 seconds 00:14:48.255 00:14:48.255 Latency(us) 00:14:48.255 [2024-12-16T14:30:40.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.255 [2024-12-16T14:30:40.455Z] =================================================================================================================== 00:14:48.255 [2024-12-16T14:30:40.455Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85230 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85230 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 84806 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84806 ']' 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84806 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84806 00:14:48.255 killing process with pid 84806 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84806' 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84806 00:14:48.255 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84806 00:14:48.513 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:48.513 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:48.513 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:48.513 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.zEXsJxKXHj 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.zEXsJxKXHj 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85262 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85262 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85262 ']' 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.514 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.514 [2024-12-16 14:30:40.628132] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:14:48.514 [2024-12-16 14:30:40.628242] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.773 [2024-12-16 14:30:40.780570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.773 [2024-12-16 14:30:40.803336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.773 [2024-12-16 14:30:40.803661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
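The target/tls.sh@160 block above builds a TLS pre-shared key in the NVMe interchange format, writes it to a mktemp file (/tmp/tmp.zEXsJxKXHj) and restricts it to 0600 before the target is started. The base64 payload in key_long decodes to the 48 configured key characters followed by four extra bytes, which is consistent with a CRC-32 being appended before encoding. The sketch below only reproduces that shape; it is not the nvmf/common.sh helper itself, and the CRC byte order is an assumption inferred from the visible output.

    key=00112233445566778899aabbccddeeff0011223344556677
    digest=2                       # hash id carried in the "NVMeTLSkey-1:0<d>:" prefix
    psk=$(python3 - "$key" "$digest" <<'PY'
    import base64, struct, sys, zlib
    key, digest = sys.argv[1].encode(), sys.argv[2]
    blob = key + struct.pack("<I", zlib.crc32(key))   # key chars + CRC-32 tail (assumed)
    print("NVMeTLSkey-1:0%s:%s:" % (digest, base64.b64encode(blob).decode()))
    PY
    )
    key_path=$(mktemp)
    echo -n "$psk" > "$key_path"
    chmod 0600 "$key_path"         # keyring_file_add_key rejects looser modes later in this log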
00:14:48.773 [2024-12-16 14:30:40.803688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.773 [2024-12-16 14:30:40.803698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.773 [2024-12-16 14:30:40.803709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.773 [2024-12-16 14:30:40.804060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.773 [2024-12-16 14:30:40.837357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:49.708 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.708 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:49.708 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:49.708 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:49.708 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.708 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.708 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.zEXsJxKXHj 00:14:49.708 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zEXsJxKXHj 00:14:49.708 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:49.967 [2024-12-16 14:30:41.916841] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.967 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:50.225 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:50.484 [2024-12-16 14:30:42.480964] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:50.484 [2024-12-16 14:30:42.481404] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:50.484 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:50.743 malloc0 00:14:50.743 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:51.001 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zEXsJxKXHj 00:14:51.260 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:51.519 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zEXsJxKXHj 00:14:51.519 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
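For reference, the setup_nvmf_tgt sequence traced above (target/tls.sh@50-59) collapses to the following rpc.py calls, using the same paths, NQNs and listen address as this run; treat it as a sketch to adapt rather than a canonical recipe.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key_path=/tmp/tmp.zEXsJxKXHj
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 "$key_path"
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0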
00:14:51.519 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:51.519 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:51.519 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zEXsJxKXHj 00:14:51.519 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:51.519 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85323 00:14:51.519 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:51.519 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:51.519 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85323 /var/tmp/bdevperf.sock 00:14:51.519 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85323 ']' 00:14:51.519 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:51.519 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.519 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:51.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:51.519 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.519 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:51.519 [2024-12-16 14:30:43.579091] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
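The host side of the run that follows drives a standalone bdevperf application over its own RPC socket: the PSK file is registered on that socket, the controller is attached with --psk, and bdevperf.py triggers the I/O. Condensed from the trace below, with the same values as this run and shown only as a sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    $rpc -s $sock keyring_file_add_key key0 /tmp/tmp.zEXsJxKXHj
    $rpc -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s $sock perform_tests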
00:14:51.519 [2024-12-16 14:30:43.579324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85323 ] 00:14:51.778 [2024-12-16 14:30:43.727653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.778 [2024-12-16 14:30:43.752709] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.778 [2024-12-16 14:30:43.787533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:51.778 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.778 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:51.778 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zEXsJxKXHj 00:14:52.037 14:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:52.296 [2024-12-16 14:30:44.351395] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:52.296 TLSTESTn1 00:14:52.296 14:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:52.555 Running I/O for 10 seconds... 00:14:54.425 4027.00 IOPS, 15.73 MiB/s [2024-12-16T14:30:47.561Z] 4084.00 IOPS, 15.95 MiB/s [2024-12-16T14:30:48.937Z] 4109.00 IOPS, 16.05 MiB/s [2024-12-16T14:30:49.873Z] 4131.75 IOPS, 16.14 MiB/s [2024-12-16T14:30:50.809Z] 4166.00 IOPS, 16.27 MiB/s [2024-12-16T14:30:51.746Z] 4124.33 IOPS, 16.11 MiB/s [2024-12-16T14:30:52.691Z] 4050.29 IOPS, 15.82 MiB/s [2024-12-16T14:30:53.629Z] 4072.75 IOPS, 15.91 MiB/s [2024-12-16T14:30:54.566Z] 4071.11 IOPS, 15.90 MiB/s [2024-12-16T14:30:54.825Z] 4090.30 IOPS, 15.98 MiB/s 00:15:02.625 Latency(us) 00:15:02.625 [2024-12-16T14:30:54.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.625 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:02.626 Verification LBA range: start 0x0 length 0x2000 00:15:02.626 TLSTESTn1 : 10.02 4093.61 15.99 0.00 0.00 31199.93 8340.95 23592.96 00:15:02.626 [2024-12-16T14:30:54.826Z] =================================================================================================================== 00:15:02.626 [2024-12-16T14:30:54.826Z] Total : 4093.61 15.99 0.00 0.00 31199.93 8340.95 23592.96 00:15:02.626 { 00:15:02.626 "results": [ 00:15:02.626 { 00:15:02.626 "job": "TLSTESTn1", 00:15:02.626 "core_mask": "0x4", 00:15:02.626 "workload": "verify", 00:15:02.626 "status": "finished", 00:15:02.626 "verify_range": { 00:15:02.626 "start": 0, 00:15:02.626 "length": 8192 00:15:02.626 }, 00:15:02.626 "queue_depth": 128, 00:15:02.626 "io_size": 4096, 00:15:02.626 "runtime": 10.022688, 00:15:02.626 "iops": 4093.612412159293, 00:15:02.626 "mibps": 15.990673484997238, 00:15:02.626 "io_failed": 0, 00:15:02.626 "io_timeout": 0, 00:15:02.626 "avg_latency_us": 31199.92516867227, 00:15:02.626 "min_latency_us": 8340.945454545454, 00:15:02.626 
"max_latency_us": 23592.96 00:15:02.626 } 00:15:02.626 ], 00:15:02.626 "core_count": 1 00:15:02.626 } 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 85323 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85323 ']' 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85323 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85323 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:02.626 killing process with pid 85323 00:15:02.626 Received shutdown signal, test time was about 10.000000 seconds 00:15:02.626 00:15:02.626 Latency(us) 00:15:02.626 [2024-12-16T14:30:54.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.626 [2024-12-16T14:30:54.826Z] =================================================================================================================== 00:15:02.626 [2024-12-16T14:30:54.826Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85323' 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85323 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85323 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.zEXsJxKXHj 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zEXsJxKXHj 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zEXsJxKXHj 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:02.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zEXsJxKXHj 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zEXsJxKXHj 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85452 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85452 /var/tmp/bdevperf.sock 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85452 ']' 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:02.626 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.626 [2024-12-16 14:30:54.805396] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
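A note on the -m arguments visible in these launches: the value is a hexadecimal core mask in which bit N selects CPU core N, so the nvmf_tgt instances started with -m 0x2 report "Reactor started on core 1" while the bdevperf instances started with -m 0x4 report core 2, keeping target and initiator on separate cores. For a single-bit mask the core index is simply the bit position:

    python3 -c 'print({hex(m): m.bit_length() - 1 for m in (0x2, 0x4)})'
    # {'0x2': 1, '0x4': 2}   -> core 1 for nvmf_tgt, core 2 for bdevperf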
00:15:02.626 [2024-12-16 14:30:54.805677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85452 ] 00:15:02.886 [2024-12-16 14:30:54.949944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.886 [2024-12-16 14:30:54.973109] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.886 [2024-12-16 14:30:55.006028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:02.886 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.886 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:02.886 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zEXsJxKXHj 00:15:03.452 [2024-12-16 14:30:55.400409] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zEXsJxKXHj': 0100666 00:15:03.452 [2024-12-16 14:30:55.400711] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:03.452 request: 00:15:03.452 { 00:15:03.452 "name": "key0", 00:15:03.452 "path": "/tmp/tmp.zEXsJxKXHj", 00:15:03.452 "method": "keyring_file_add_key", 00:15:03.452 "req_id": 1 00:15:03.452 } 00:15:03.452 Got JSON-RPC error response 00:15:03.452 response: 00:15:03.452 { 00:15:03.452 "code": -1, 00:15:03.452 "message": "Operation not permitted" 00:15:03.452 } 00:15:03.453 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:03.712 [2024-12-16 14:30:55.696576] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:03.712 [2024-12-16 14:30:55.696902] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:03.712 request: 00:15:03.712 { 00:15:03.712 "name": "TLSTEST", 00:15:03.712 "trtype": "tcp", 00:15:03.712 "traddr": "10.0.0.3", 00:15:03.712 "adrfam": "ipv4", 00:15:03.712 "trsvcid": "4420", 00:15:03.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:03.712 "prchk_reftag": false, 00:15:03.712 "prchk_guard": false, 00:15:03.712 "hdgst": false, 00:15:03.712 "ddgst": false, 00:15:03.712 "psk": "key0", 00:15:03.712 "allow_unrecognized_csi": false, 00:15:03.712 "method": "bdev_nvme_attach_controller", 00:15:03.712 "req_id": 1 00:15:03.712 } 00:15:03.712 Got JSON-RPC error response 00:15:03.712 response: 00:15:03.712 { 00:15:03.712 "code": -126, 00:15:03.712 "message": "Required key not available" 00:15:03.712 } 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 85452 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85452 ']' 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85452 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85452 00:15:03.712 killing process with pid 85452 00:15:03.712 Received shutdown signal, test time was about 10.000000 seconds 00:15:03.712 00:15:03.712 Latency(us) 00:15:03.712 [2024-12-16T14:30:55.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.712 [2024-12-16T14:30:55.912Z] =================================================================================================================== 00:15:03.712 [2024-12-16T14:30:55.912Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85452' 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85452 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85452 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 85262 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85262 ']' 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85262 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85262 00:15:03.712 killing process with pid 85262 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85262' 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85262 00:15:03.712 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85262 00:15:03.971 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:15:03.971 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:03.971 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:03.971 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:15:03.971 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85478 00:15:03.971 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:03.971 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85478 00:15:03.971 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85478 ']' 00:15:03.971 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.971 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.971 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.971 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.971 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.971 [2024-12-16 14:30:56.102393] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:03.971 [2024-12-16 14:30:56.102507] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.230 [2024-12-16 14:30:56.245979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.230 [2024-12-16 14:30:56.267914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.230 [2024-12-16 14:30:56.267992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.230 [2024-12-16 14:30:56.268030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.230 [2024-12-16 14:30:56.268043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.230 [2024-12-16 14:30:56.268054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:04.230 [2024-12-16 14:30:56.268388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.230 [2024-12-16 14:30:56.304258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:05.168 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:05.168 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:05.168 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:05.168 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:05.168 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:05.168 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.168 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.zEXsJxKXHj 00:15:05.168 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:05.168 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.zEXsJxKXHj 00:15:05.168 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:15:05.168 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.168 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:15:05.168 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.168 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.zEXsJxKXHj 00:15:05.168 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zEXsJxKXHj 00:15:05.168 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:05.427 [2024-12-16 14:30:57.404267] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.427 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:05.686 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:05.945 [2024-12-16 14:30:57.936379] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:05.945 [2024-12-16 14:30:57.936699] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:05.945 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:06.204 malloc0 00:15:06.204 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:06.463 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zEXsJxKXHj 00:15:06.722 
[2024-12-16 14:30:58.722508] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zEXsJxKXHj': 0100666 00:15:06.722 [2024-12-16 14:30:58.722563] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:06.722 request: 00:15:06.722 { 00:15:06.722 "name": "key0", 00:15:06.722 "path": "/tmp/tmp.zEXsJxKXHj", 00:15:06.722 "method": "keyring_file_add_key", 00:15:06.722 "req_id": 1 00:15:06.722 } 00:15:06.722 Got JSON-RPC error response 00:15:06.722 response: 00:15:06.722 { 00:15:06.722 "code": -1, 00:15:06.722 "message": "Operation not permitted" 00:15:06.722 } 00:15:06.722 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:06.981 [2024-12-16 14:30:58.974676] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:06.981 [2024-12-16 14:30:58.974998] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:06.981 request: 00:15:06.981 { 00:15:06.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.981 "host": "nqn.2016-06.io.spdk:host1", 00:15:06.981 "psk": "key0", 00:15:06.981 "method": "nvmf_subsystem_add_host", 00:15:06.981 "req_id": 1 00:15:06.981 } 00:15:06.982 Got JSON-RPC error response 00:15:06.982 response: 00:15:06.982 { 00:15:06.982 "code": -32603, 00:15:06.982 "message": "Internal error" 00:15:06.982 } 00:15:06.982 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:06.982 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:06.982 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:06.982 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:06.982 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 85478 00:15:06.982 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85478 ']' 00:15:06.982 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85478 00:15:06.982 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:06.982 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85478 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85478' 00:15:06.982 killing process with pid 85478 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85478 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85478 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.zEXsJxKXHj 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85546 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85546 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85546 ']' 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.982 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.241 [2024-12-16 14:30:59.226000] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:07.241 [2024-12-16 14:30:59.226096] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.241 [2024-12-16 14:30:59.377607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.241 [2024-12-16 14:30:59.400492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.241 [2024-12-16 14:30:59.400564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.241 [2024-12-16 14:30:59.400588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:07.241 [2024-12-16 14:30:59.400597] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:07.241 [2024-12-16 14:30:59.400606] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:07.241 [2024-12-16 14:30:59.400992] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.241 [2024-12-16 14:30:59.433932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:07.500 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.500 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:07.500 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:07.500 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:07.500 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.500 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.500 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.zEXsJxKXHj 00:15:07.500 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zEXsJxKXHj 00:15:07.500 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:07.759 [2024-12-16 14:30:59.804369] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.759 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:08.019 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:08.278 [2024-12-16 14:31:00.340515] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:08.278 [2024-12-16 14:31:00.340725] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:08.278 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:08.537 malloc0 00:15:08.537 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:08.796 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zEXsJxKXHj 00:15:09.054 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:09.313 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:09.313 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=85590 00:15:09.313 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:09.313 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 85590 /var/tmp/bdevperf.sock 00:15:09.313 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85590 ']' 
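With the key back at 0600 the full setup succeeds again, and once TLSTESTn1 is attached the test snapshots both applications with save_config (target/tls.sh@198 and @199); the dumps that follow are where the keyring_file_add_key entry, the --psk key0 host entry and the "secure_channel": true listener appear in JSON form. save_config writes the running configuration to stdout, and such a dump can be replayed into a fresh application with load_config (the file name below is illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/tls_tgt_config.json
    $rpc load_config < /tmp/tls_tgt_config.json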
00:15:09.313 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:09.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:09.313 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.313 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:09.313 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.313 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:09.313 [2024-12-16 14:31:01.423721] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:09.313 [2024-12-16 14:31:01.423845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85590 ] 00:15:09.572 [2024-12-16 14:31:01.570494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.572 [2024-12-16 14:31:01.591738] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.572 [2024-12-16 14:31:01.621056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:09.572 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.572 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:09.572 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zEXsJxKXHj 00:15:09.831 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:10.090 [2024-12-16 14:31:02.133733] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:10.090 TLSTESTn1 00:15:10.090 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:10.349 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:10.349 "subsystems": [ 00:15:10.349 { 00:15:10.349 "subsystem": "keyring", 00:15:10.349 "config": [ 00:15:10.349 { 00:15:10.349 "method": "keyring_file_add_key", 00:15:10.349 "params": { 00:15:10.349 "name": "key0", 00:15:10.349 "path": "/tmp/tmp.zEXsJxKXHj" 00:15:10.349 } 00:15:10.349 } 00:15:10.349 ] 00:15:10.349 }, 00:15:10.349 { 00:15:10.349 "subsystem": "iobuf", 00:15:10.349 "config": [ 00:15:10.349 { 00:15:10.349 "method": "iobuf_set_options", 00:15:10.349 "params": { 00:15:10.349 "small_pool_count": 8192, 00:15:10.349 "large_pool_count": 1024, 00:15:10.349 "small_bufsize": 8192, 00:15:10.349 "large_bufsize": 135168, 00:15:10.349 "enable_numa": false 00:15:10.349 } 00:15:10.349 } 00:15:10.349 ] 00:15:10.349 }, 00:15:10.349 { 00:15:10.349 "subsystem": "sock", 00:15:10.349 "config": [ 00:15:10.349 { 00:15:10.349 "method": "sock_set_default_impl", 00:15:10.349 "params": { 
00:15:10.349 "impl_name": "uring" 00:15:10.349 } 00:15:10.349 }, 00:15:10.349 { 00:15:10.349 "method": "sock_impl_set_options", 00:15:10.349 "params": { 00:15:10.349 "impl_name": "ssl", 00:15:10.349 "recv_buf_size": 4096, 00:15:10.349 "send_buf_size": 4096, 00:15:10.349 "enable_recv_pipe": true, 00:15:10.349 "enable_quickack": false, 00:15:10.349 "enable_placement_id": 0, 00:15:10.349 "enable_zerocopy_send_server": true, 00:15:10.349 "enable_zerocopy_send_client": false, 00:15:10.349 "zerocopy_threshold": 0, 00:15:10.349 "tls_version": 0, 00:15:10.349 "enable_ktls": false 00:15:10.349 } 00:15:10.349 }, 00:15:10.349 { 00:15:10.349 "method": "sock_impl_set_options", 00:15:10.349 "params": { 00:15:10.349 "impl_name": "posix", 00:15:10.349 "recv_buf_size": 2097152, 00:15:10.349 "send_buf_size": 2097152, 00:15:10.349 "enable_recv_pipe": true, 00:15:10.349 "enable_quickack": false, 00:15:10.349 "enable_placement_id": 0, 00:15:10.349 "enable_zerocopy_send_server": true, 00:15:10.349 "enable_zerocopy_send_client": false, 00:15:10.349 "zerocopy_threshold": 0, 00:15:10.349 "tls_version": 0, 00:15:10.349 "enable_ktls": false 00:15:10.349 } 00:15:10.349 }, 00:15:10.349 { 00:15:10.349 "method": "sock_impl_set_options", 00:15:10.349 "params": { 00:15:10.349 "impl_name": "uring", 00:15:10.349 "recv_buf_size": 2097152, 00:15:10.349 "send_buf_size": 2097152, 00:15:10.349 "enable_recv_pipe": true, 00:15:10.349 "enable_quickack": false, 00:15:10.349 "enable_placement_id": 0, 00:15:10.349 "enable_zerocopy_send_server": false, 00:15:10.349 "enable_zerocopy_send_client": false, 00:15:10.349 "zerocopy_threshold": 0, 00:15:10.349 "tls_version": 0, 00:15:10.349 "enable_ktls": false 00:15:10.349 } 00:15:10.349 } 00:15:10.349 ] 00:15:10.349 }, 00:15:10.349 { 00:15:10.349 "subsystem": "vmd", 00:15:10.349 "config": [] 00:15:10.349 }, 00:15:10.349 { 00:15:10.349 "subsystem": "accel", 00:15:10.349 "config": [ 00:15:10.349 { 00:15:10.349 "method": "accel_set_options", 00:15:10.349 "params": { 00:15:10.349 "small_cache_size": 128, 00:15:10.349 "large_cache_size": 16, 00:15:10.349 "task_count": 2048, 00:15:10.349 "sequence_count": 2048, 00:15:10.349 "buf_count": 2048 00:15:10.349 } 00:15:10.349 } 00:15:10.349 ] 00:15:10.349 }, 00:15:10.349 { 00:15:10.349 "subsystem": "bdev", 00:15:10.349 "config": [ 00:15:10.349 { 00:15:10.349 "method": "bdev_set_options", 00:15:10.349 "params": { 00:15:10.349 "bdev_io_pool_size": 65535, 00:15:10.349 "bdev_io_cache_size": 256, 00:15:10.349 "bdev_auto_examine": true, 00:15:10.349 "iobuf_small_cache_size": 128, 00:15:10.349 "iobuf_large_cache_size": 16 00:15:10.349 } 00:15:10.349 }, 00:15:10.349 { 00:15:10.349 "method": "bdev_raid_set_options", 00:15:10.349 "params": { 00:15:10.349 "process_window_size_kb": 1024, 00:15:10.349 "process_max_bandwidth_mb_sec": 0 00:15:10.349 } 00:15:10.349 }, 00:15:10.349 { 00:15:10.349 "method": "bdev_iscsi_set_options", 00:15:10.349 "params": { 00:15:10.349 "timeout_sec": 30 00:15:10.349 } 00:15:10.349 }, 00:15:10.349 { 00:15:10.349 "method": "bdev_nvme_set_options", 00:15:10.349 "params": { 00:15:10.349 "action_on_timeout": "none", 00:15:10.349 "timeout_us": 0, 00:15:10.349 "timeout_admin_us": 0, 00:15:10.349 "keep_alive_timeout_ms": 10000, 00:15:10.349 "arbitration_burst": 0, 00:15:10.349 "low_priority_weight": 0, 00:15:10.349 "medium_priority_weight": 0, 00:15:10.349 "high_priority_weight": 0, 00:15:10.349 "nvme_adminq_poll_period_us": 10000, 00:15:10.349 "nvme_ioq_poll_period_us": 0, 00:15:10.349 "io_queue_requests": 0, 00:15:10.349 "delay_cmd_submit": 
true, 00:15:10.350 "transport_retry_count": 4, 00:15:10.350 "bdev_retry_count": 3, 00:15:10.350 "transport_ack_timeout": 0, 00:15:10.350 "ctrlr_loss_timeout_sec": 0, 00:15:10.350 "reconnect_delay_sec": 0, 00:15:10.350 "fast_io_fail_timeout_sec": 0, 00:15:10.350 "disable_auto_failback": false, 00:15:10.350 "generate_uuids": false, 00:15:10.350 "transport_tos": 0, 00:15:10.350 "nvme_error_stat": false, 00:15:10.350 "rdma_srq_size": 0, 00:15:10.350 "io_path_stat": false, 00:15:10.350 "allow_accel_sequence": false, 00:15:10.350 "rdma_max_cq_size": 0, 00:15:10.350 "rdma_cm_event_timeout_ms": 0, 00:15:10.350 "dhchap_digests": [ 00:15:10.350 "sha256", 00:15:10.350 "sha384", 00:15:10.350 "sha512" 00:15:10.350 ], 00:15:10.350 "dhchap_dhgroups": [ 00:15:10.350 "null", 00:15:10.350 "ffdhe2048", 00:15:10.350 "ffdhe3072", 00:15:10.350 "ffdhe4096", 00:15:10.350 "ffdhe6144", 00:15:10.350 "ffdhe8192" 00:15:10.350 ], 00:15:10.350 "rdma_umr_per_io": false 00:15:10.350 } 00:15:10.350 }, 00:15:10.350 { 00:15:10.350 "method": "bdev_nvme_set_hotplug", 00:15:10.350 "params": { 00:15:10.350 "period_us": 100000, 00:15:10.350 "enable": false 00:15:10.350 } 00:15:10.350 }, 00:15:10.350 { 00:15:10.350 "method": "bdev_malloc_create", 00:15:10.350 "params": { 00:15:10.350 "name": "malloc0", 00:15:10.350 "num_blocks": 8192, 00:15:10.350 "block_size": 4096, 00:15:10.350 "physical_block_size": 4096, 00:15:10.350 "uuid": "333c0779-2b95-497e-8b0a-4aa4cf4a0e86", 00:15:10.350 "optimal_io_boundary": 0, 00:15:10.350 "md_size": 0, 00:15:10.350 "dif_type": 0, 00:15:10.350 "dif_is_head_of_md": false, 00:15:10.350 "dif_pi_format": 0 00:15:10.350 } 00:15:10.350 }, 00:15:10.350 { 00:15:10.350 "method": "bdev_wait_for_examine" 00:15:10.350 } 00:15:10.350 ] 00:15:10.350 }, 00:15:10.350 { 00:15:10.350 "subsystem": "nbd", 00:15:10.350 "config": [] 00:15:10.350 }, 00:15:10.350 { 00:15:10.350 "subsystem": "scheduler", 00:15:10.350 "config": [ 00:15:10.350 { 00:15:10.350 "method": "framework_set_scheduler", 00:15:10.350 "params": { 00:15:10.350 "name": "static" 00:15:10.350 } 00:15:10.350 } 00:15:10.350 ] 00:15:10.350 }, 00:15:10.350 { 00:15:10.350 "subsystem": "nvmf", 00:15:10.350 "config": [ 00:15:10.350 { 00:15:10.350 "method": "nvmf_set_config", 00:15:10.350 "params": { 00:15:10.350 "discovery_filter": "match_any", 00:15:10.350 "admin_cmd_passthru": { 00:15:10.350 "identify_ctrlr": false 00:15:10.350 }, 00:15:10.350 "dhchap_digests": [ 00:15:10.350 "sha256", 00:15:10.350 "sha384", 00:15:10.350 "sha512" 00:15:10.350 ], 00:15:10.350 "dhchap_dhgroups": [ 00:15:10.350 "null", 00:15:10.350 "ffdhe2048", 00:15:10.350 "ffdhe3072", 00:15:10.350 "ffdhe4096", 00:15:10.350 "ffdhe6144", 00:15:10.350 "ffdhe8192" 00:15:10.350 ] 00:15:10.350 } 00:15:10.350 }, 00:15:10.350 { 00:15:10.350 "method": "nvmf_set_max_subsystems", 00:15:10.350 "params": { 00:15:10.350 "max_subsystems": 1024 00:15:10.350 } 00:15:10.350 }, 00:15:10.350 { 00:15:10.350 "method": "nvmf_set_crdt", 00:15:10.350 "params": { 00:15:10.350 "crdt1": 0, 00:15:10.350 "crdt2": 0, 00:15:10.350 "crdt3": 0 00:15:10.350 } 00:15:10.350 }, 00:15:10.350 { 00:15:10.350 "method": "nvmf_create_transport", 00:15:10.350 "params": { 00:15:10.350 "trtype": "TCP", 00:15:10.350 "max_queue_depth": 128, 00:15:10.350 "max_io_qpairs_per_ctrlr": 127, 00:15:10.350 "in_capsule_data_size": 4096, 00:15:10.350 "max_io_size": 131072, 00:15:10.350 "io_unit_size": 131072, 00:15:10.350 "max_aq_depth": 128, 00:15:10.350 "num_shared_buffers": 511, 00:15:10.350 "buf_cache_size": 4294967295, 00:15:10.350 
"dif_insert_or_strip": false, 00:15:10.350 "zcopy": false, 00:15:10.350 "c2h_success": false, 00:15:10.350 "sock_priority": 0, 00:15:10.350 "abort_timeout_sec": 1, 00:15:10.350 "ack_timeout": 0, 00:15:10.350 "data_wr_pool_size": 0 00:15:10.350 } 00:15:10.350 }, 00:15:10.350 { 00:15:10.350 "method": "nvmf_create_subsystem", 00:15:10.350 "params": { 00:15:10.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.350 "allow_any_host": false, 00:15:10.350 "serial_number": "SPDK00000000000001", 00:15:10.350 "model_number": "SPDK bdev Controller", 00:15:10.350 "max_namespaces": 10, 00:15:10.350 "min_cntlid": 1, 00:15:10.350 "max_cntlid": 65519, 00:15:10.350 "ana_reporting": false 00:15:10.350 } 00:15:10.350 }, 00:15:10.350 { 00:15:10.350 "method": "nvmf_subsystem_add_host", 00:15:10.350 "params": { 00:15:10.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.350 "host": "nqn.2016-06.io.spdk:host1", 00:15:10.350 "psk": "key0" 00:15:10.350 } 00:15:10.350 }, 00:15:10.350 { 00:15:10.350 "method": "nvmf_subsystem_add_ns", 00:15:10.350 "params": { 00:15:10.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.350 "namespace": { 00:15:10.350 "nsid": 1, 00:15:10.350 "bdev_name": "malloc0", 00:15:10.350 "nguid": "333C07792B95497E8B0A4AA4CF4A0E86", 00:15:10.350 "uuid": "333c0779-2b95-497e-8b0a-4aa4cf4a0e86", 00:15:10.350 "no_auto_visible": false 00:15:10.350 } 00:15:10.350 } 00:15:10.350 }, 00:15:10.350 { 00:15:10.350 "method": "nvmf_subsystem_add_listener", 00:15:10.350 "params": { 00:15:10.350 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.350 "listen_address": { 00:15:10.350 "trtype": "TCP", 00:15:10.350 "adrfam": "IPv4", 00:15:10.350 "traddr": "10.0.0.3", 00:15:10.350 "trsvcid": "4420" 00:15:10.350 }, 00:15:10.350 "secure_channel": true 00:15:10.350 } 00:15:10.350 } 00:15:10.350 ] 00:15:10.350 } 00:15:10.350 ] 00:15:10.350 }' 00:15:10.350 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:10.918 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:10.918 "subsystems": [ 00:15:10.918 { 00:15:10.918 "subsystem": "keyring", 00:15:10.918 "config": [ 00:15:10.918 { 00:15:10.918 "method": "keyring_file_add_key", 00:15:10.918 "params": { 00:15:10.919 "name": "key0", 00:15:10.919 "path": "/tmp/tmp.zEXsJxKXHj" 00:15:10.919 } 00:15:10.919 } 00:15:10.919 ] 00:15:10.919 }, 00:15:10.919 { 00:15:10.919 "subsystem": "iobuf", 00:15:10.919 "config": [ 00:15:10.919 { 00:15:10.919 "method": "iobuf_set_options", 00:15:10.919 "params": { 00:15:10.919 "small_pool_count": 8192, 00:15:10.919 "large_pool_count": 1024, 00:15:10.919 "small_bufsize": 8192, 00:15:10.919 "large_bufsize": 135168, 00:15:10.919 "enable_numa": false 00:15:10.919 } 00:15:10.919 } 00:15:10.919 ] 00:15:10.919 }, 00:15:10.919 { 00:15:10.919 "subsystem": "sock", 00:15:10.919 "config": [ 00:15:10.919 { 00:15:10.919 "method": "sock_set_default_impl", 00:15:10.919 "params": { 00:15:10.919 "impl_name": "uring" 00:15:10.919 } 00:15:10.919 }, 00:15:10.919 { 00:15:10.919 "method": "sock_impl_set_options", 00:15:10.919 "params": { 00:15:10.919 "impl_name": "ssl", 00:15:10.919 "recv_buf_size": 4096, 00:15:10.919 "send_buf_size": 4096, 00:15:10.919 "enable_recv_pipe": true, 00:15:10.919 "enable_quickack": false, 00:15:10.919 "enable_placement_id": 0, 00:15:10.919 "enable_zerocopy_send_server": true, 00:15:10.919 "enable_zerocopy_send_client": false, 00:15:10.919 "zerocopy_threshold": 0, 00:15:10.919 "tls_version": 0, 00:15:10.919 
"enable_ktls": false 00:15:10.919 } 00:15:10.919 }, 00:15:10.919 { 00:15:10.919 "method": "sock_impl_set_options", 00:15:10.919 "params": { 00:15:10.919 "impl_name": "posix", 00:15:10.919 "recv_buf_size": 2097152, 00:15:10.919 "send_buf_size": 2097152, 00:15:10.919 "enable_recv_pipe": true, 00:15:10.919 "enable_quickack": false, 00:15:10.919 "enable_placement_id": 0, 00:15:10.919 "enable_zerocopy_send_server": true, 00:15:10.919 "enable_zerocopy_send_client": false, 00:15:10.919 "zerocopy_threshold": 0, 00:15:10.919 "tls_version": 0, 00:15:10.919 "enable_ktls": false 00:15:10.919 } 00:15:10.919 }, 00:15:10.919 { 00:15:10.919 "method": "sock_impl_set_options", 00:15:10.919 "params": { 00:15:10.919 "impl_name": "uring", 00:15:10.919 "recv_buf_size": 2097152, 00:15:10.919 "send_buf_size": 2097152, 00:15:10.919 "enable_recv_pipe": true, 00:15:10.919 "enable_quickack": false, 00:15:10.919 "enable_placement_id": 0, 00:15:10.919 "enable_zerocopy_send_server": false, 00:15:10.919 "enable_zerocopy_send_client": false, 00:15:10.919 "zerocopy_threshold": 0, 00:15:10.919 "tls_version": 0, 00:15:10.919 "enable_ktls": false 00:15:10.919 } 00:15:10.919 } 00:15:10.919 ] 00:15:10.919 }, 00:15:10.919 { 00:15:10.919 "subsystem": "vmd", 00:15:10.919 "config": [] 00:15:10.919 }, 00:15:10.919 { 00:15:10.919 "subsystem": "accel", 00:15:10.919 "config": [ 00:15:10.919 { 00:15:10.919 "method": "accel_set_options", 00:15:10.919 "params": { 00:15:10.919 "small_cache_size": 128, 00:15:10.919 "large_cache_size": 16, 00:15:10.919 "task_count": 2048, 00:15:10.919 "sequence_count": 2048, 00:15:10.919 "buf_count": 2048 00:15:10.919 } 00:15:10.919 } 00:15:10.919 ] 00:15:10.919 }, 00:15:10.919 { 00:15:10.919 "subsystem": "bdev", 00:15:10.919 "config": [ 00:15:10.919 { 00:15:10.919 "method": "bdev_set_options", 00:15:10.919 "params": { 00:15:10.919 "bdev_io_pool_size": 65535, 00:15:10.919 "bdev_io_cache_size": 256, 00:15:10.919 "bdev_auto_examine": true, 00:15:10.919 "iobuf_small_cache_size": 128, 00:15:10.919 "iobuf_large_cache_size": 16 00:15:10.919 } 00:15:10.919 }, 00:15:10.919 { 00:15:10.919 "method": "bdev_raid_set_options", 00:15:10.919 "params": { 00:15:10.919 "process_window_size_kb": 1024, 00:15:10.919 "process_max_bandwidth_mb_sec": 0 00:15:10.919 } 00:15:10.919 }, 00:15:10.919 { 00:15:10.919 "method": "bdev_iscsi_set_options", 00:15:10.919 "params": { 00:15:10.919 "timeout_sec": 30 00:15:10.919 } 00:15:10.919 }, 00:15:10.919 { 00:15:10.919 "method": "bdev_nvme_set_options", 00:15:10.919 "params": { 00:15:10.919 "action_on_timeout": "none", 00:15:10.919 "timeout_us": 0, 00:15:10.919 "timeout_admin_us": 0, 00:15:10.919 "keep_alive_timeout_ms": 10000, 00:15:10.919 "arbitration_burst": 0, 00:15:10.919 "low_priority_weight": 0, 00:15:10.919 "medium_priority_weight": 0, 00:15:10.919 "high_priority_weight": 0, 00:15:10.919 "nvme_adminq_poll_period_us": 10000, 00:15:10.919 "nvme_ioq_poll_period_us": 0, 00:15:10.919 "io_queue_requests": 512, 00:15:10.919 "delay_cmd_submit": true, 00:15:10.919 "transport_retry_count": 4, 00:15:10.919 "bdev_retry_count": 3, 00:15:10.919 "transport_ack_timeout": 0, 00:15:10.919 "ctrlr_loss_timeout_sec": 0, 00:15:10.919 "reconnect_delay_sec": 0, 00:15:10.919 "fast_io_fail_timeout_sec": 0, 00:15:10.919 "disable_auto_failback": false, 00:15:10.919 "generate_uuids": false, 00:15:10.919 "transport_tos": 0, 00:15:10.919 "nvme_error_stat": false, 00:15:10.919 "rdma_srq_size": 0, 00:15:10.919 "io_path_stat": false, 00:15:10.919 "allow_accel_sequence": false, 00:15:10.919 "rdma_max_cq_size": 0, 
00:15:10.919 "rdma_cm_event_timeout_ms": 0, 00:15:10.919 "dhchap_digests": [ 00:15:10.919 "sha256", 00:15:10.919 "sha384", 00:15:10.919 "sha512" 00:15:10.919 ], 00:15:10.919 "dhchap_dhgroups": [ 00:15:10.919 "null", 00:15:10.919 "ffdhe2048", 00:15:10.919 "ffdhe3072", 00:15:10.919 "ffdhe4096", 00:15:10.919 "ffdhe6144", 00:15:10.919 "ffdhe8192" 00:15:10.919 ], 00:15:10.919 "rdma_umr_per_io": false 00:15:10.919 } 00:15:10.919 }, 00:15:10.919 { 00:15:10.919 "method": "bdev_nvme_attach_controller", 00:15:10.919 "params": { 00:15:10.919 "name": "TLSTEST", 00:15:10.919 "trtype": "TCP", 00:15:10.919 "adrfam": "IPv4", 00:15:10.919 "traddr": "10.0.0.3", 00:15:10.919 "trsvcid": "4420", 00:15:10.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.919 "prchk_reftag": false, 00:15:10.919 "prchk_guard": false, 00:15:10.919 "ctrlr_loss_timeout_sec": 0, 00:15:10.919 "reconnect_delay_sec": 0, 00:15:10.919 "fast_io_fail_timeout_sec": 0, 00:15:10.919 "psk": "key0", 00:15:10.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.919 "hdgst": false, 00:15:10.919 "ddgst": false, 00:15:10.919 "multipath": "multipath" 00:15:10.919 } 00:15:10.919 }, 00:15:10.919 { 00:15:10.919 "method": "bdev_nvme_set_hotplug", 00:15:10.919 "params": { 00:15:10.919 "period_us": 100000, 00:15:10.919 "enable": false 00:15:10.919 } 00:15:10.919 }, 00:15:10.919 { 00:15:10.919 "method": "bdev_wait_for_examine" 00:15:10.919 } 00:15:10.919 ] 00:15:10.919 }, 00:15:10.919 { 00:15:10.919 "subsystem": "nbd", 00:15:10.919 "config": [] 00:15:10.919 } 00:15:10.919 ] 00:15:10.919 }' 00:15:10.919 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 85590 00:15:10.919 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85590 ']' 00:15:10.919 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85590 00:15:10.919 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:10.919 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.919 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85590 00:15:10.919 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:10.919 killing process with pid 85590 00:15:10.919 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:10.919 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85590' 00:15:10.919 Received shutdown signal, test time was about 10.000000 seconds 00:15:10.919 00:15:10.919 Latency(us) 00:15:10.919 [2024-12-16T14:31:03.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.919 [2024-12-16T14:31:03.119Z] =================================================================================================================== 00:15:10.919 [2024-12-16T14:31:03.119Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:10.919 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85590 00:15:10.919 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85590 00:15:10.920 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 85546 00:15:10.920 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85546 ']' 
00:15:10.920 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85546 00:15:10.920 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:10.920 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.920 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85546 00:15:10.920 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:10.920 killing process with pid 85546 00:15:10.920 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:10.920 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85546' 00:15:10.920 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85546 00:15:10.920 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85546 00:15:11.190 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:11.190 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:11.190 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:11.190 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:11.190 "subsystems": [ 00:15:11.190 { 00:15:11.190 "subsystem": "keyring", 00:15:11.190 "config": [ 00:15:11.190 { 00:15:11.190 "method": "keyring_file_add_key", 00:15:11.190 "params": { 00:15:11.190 "name": "key0", 00:15:11.190 "path": "/tmp/tmp.zEXsJxKXHj" 00:15:11.190 } 00:15:11.190 } 00:15:11.190 ] 00:15:11.190 }, 00:15:11.190 { 00:15:11.190 "subsystem": "iobuf", 00:15:11.190 "config": [ 00:15:11.190 { 00:15:11.190 "method": "iobuf_set_options", 00:15:11.190 "params": { 00:15:11.190 "small_pool_count": 8192, 00:15:11.190 "large_pool_count": 1024, 00:15:11.190 "small_bufsize": 8192, 00:15:11.190 "large_bufsize": 135168, 00:15:11.190 "enable_numa": false 00:15:11.190 } 00:15:11.190 } 00:15:11.190 ] 00:15:11.190 }, 00:15:11.190 { 00:15:11.190 "subsystem": "sock", 00:15:11.190 "config": [ 00:15:11.190 { 00:15:11.190 "method": "sock_set_default_impl", 00:15:11.190 "params": { 00:15:11.190 "impl_name": "uring" 00:15:11.190 } 00:15:11.190 }, 00:15:11.190 { 00:15:11.190 "method": "sock_impl_set_options", 00:15:11.190 "params": { 00:15:11.190 "impl_name": "ssl", 00:15:11.190 "recv_buf_size": 4096, 00:15:11.190 "send_buf_size": 4096, 00:15:11.190 "enable_recv_pipe": true, 00:15:11.190 "enable_quickack": false, 00:15:11.190 "enable_placement_id": 0, 00:15:11.190 "enable_zerocopy_send_server": true, 00:15:11.190 "enable_zerocopy_send_client": false, 00:15:11.190 "zerocopy_threshold": 0, 00:15:11.190 "tls_version": 0, 00:15:11.190 "enable_ktls": false 00:15:11.190 } 00:15:11.190 }, 00:15:11.190 { 00:15:11.190 "method": "sock_impl_set_options", 00:15:11.190 "params": { 00:15:11.190 "impl_name": "posix", 00:15:11.190 "recv_buf_size": 2097152, 00:15:11.190 "send_buf_size": 2097152, 00:15:11.190 "enable_recv_pipe": true, 00:15:11.190 "enable_quickack": false, 00:15:11.190 "enable_placement_id": 0, 00:15:11.190 "enable_zerocopy_send_server": true, 00:15:11.190 "enable_zerocopy_send_client": false, 00:15:11.190 "zerocopy_threshold": 0, 00:15:11.190 "tls_version": 0, 00:15:11.190 "enable_ktls": false 
00:15:11.190 } 00:15:11.190 }, 00:15:11.190 { 00:15:11.190 "method": "sock_impl_set_options", 00:15:11.190 "params": { 00:15:11.190 "impl_name": "uring", 00:15:11.190 "recv_buf_size": 2097152, 00:15:11.190 "send_buf_size": 2097152, 00:15:11.190 "enable_recv_pipe": true, 00:15:11.190 "enable_quickack": false, 00:15:11.190 "enable_placement_id": 0, 00:15:11.190 "enable_zerocopy_send_server": false, 00:15:11.190 "enable_zerocopy_send_client": false, 00:15:11.190 "zerocopy_threshold": 0, 00:15:11.190 "tls_version": 0, 00:15:11.190 "enable_ktls": false 00:15:11.190 } 00:15:11.190 } 00:15:11.190 ] 00:15:11.190 }, 00:15:11.190 { 00:15:11.190 "subsystem": "vmd", 00:15:11.190 "config": [] 00:15:11.190 }, 00:15:11.190 { 00:15:11.190 "subsystem": "accel", 00:15:11.190 "config": [ 00:15:11.190 { 00:15:11.190 "method": "accel_set_options", 00:15:11.190 "params": { 00:15:11.190 "small_cache_size": 128, 00:15:11.190 "large_cache_size": 16, 00:15:11.190 "task_count": 2048, 00:15:11.190 "sequence_count": 2048, 00:15:11.190 "buf_count": 2048 00:15:11.190 } 00:15:11.190 } 00:15:11.191 ] 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "subsystem": "bdev", 00:15:11.191 "config": [ 00:15:11.191 { 00:15:11.191 "method": "bdev_set_options", 00:15:11.191 "params": { 00:15:11.191 "bdev_io_pool_size": 65535, 00:15:11.191 "bdev_io_cache_size": 256, 00:15:11.191 "bdev_auto_examine": true, 00:15:11.191 "iobuf_small_cache_size": 128, 00:15:11.191 "iobuf_large_cache_size": 16 00:15:11.191 } 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "method": "bdev_raid_set_options", 00:15:11.191 "params": { 00:15:11.191 "process_window_size_kb": 1024, 00:15:11.191 "process_max_bandwidth_mb_sec": 0 00:15:11.191 } 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "method": "bdev_iscsi_set_options", 00:15:11.191 "params": { 00:15:11.191 "timeout_sec": 30 00:15:11.191 } 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "method": "bdev_nvme_set_options", 00:15:11.191 "params": { 00:15:11.191 "action_on_timeout": "none", 00:15:11.191 "timeout_us": 0, 00:15:11.191 "timeout_admin_us": 0, 00:15:11.191 "keep_alive_timeout_ms": 10000, 00:15:11.191 "arbitration_burst": 0, 00:15:11.191 "low_priority_weight": 0, 00:15:11.191 "medium_priority_weight": 0, 00:15:11.191 "high_priority_weight": 0, 00:15:11.191 "nvme_adminq_poll_period_us": 10000, 00:15:11.191 "nvme_ioq_poll_period_us": 0, 00:15:11.191 "io_queue_requests": 0, 00:15:11.191 "delay_cmd_submit": true, 00:15:11.191 "transport_retry_count": 4, 00:15:11.191 "bdev_retry_count": 3, 00:15:11.191 "transport_ack_timeout": 0, 00:15:11.191 "ctrlr_loss_timeout_sec": 0, 00:15:11.191 "reconnect_delay_sec": 0, 00:15:11.191 "fast_io_fail_timeout_sec": 0, 00:15:11.191 "disable_auto_failback": false, 00:15:11.191 "generate_uuids": false, 00:15:11.191 "transport_tos": 0, 00:15:11.191 "nvme_error_stat": false, 00:15:11.191 "rdma_srq_size": 0, 00:15:11.191 "io_path_stat": false, 00:15:11.191 "allow_accel_sequence": false, 00:15:11.191 "rdma_max_cq_size": 0, 00:15:11.191 "rdma_cm_event_timeout_ms": 0, 00:15:11.191 "dhchap_digests": [ 00:15:11.191 "sha256", 00:15:11.191 "sha384", 00:15:11.191 "sha512" 00:15:11.191 ], 00:15:11.191 "dhchap_dhgroups": [ 00:15:11.191 "null", 00:15:11.191 "ffdhe2048", 00:15:11.191 "ffdhe3072", 00:15:11.191 "ffdhe4096", 00:15:11.191 "ffdhe6144", 00:15:11.191 "ffdhe8192" 00:15:11.191 ], 00:15:11.191 "rdma_umr_per_io": false 00:15:11.191 } 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "method": "bdev_nvme_set_hotplug", 00:15:11.191 "params": { 00:15:11.191 "period_us": 100000, 00:15:11.191 "enable": false 
00:15:11.191 } 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "method": "bdev_malloc_create", 00:15:11.191 "params": { 00:15:11.191 "name": "malloc0", 00:15:11.191 "num_blocks": 8192, 00:15:11.191 "block_size": 4096, 00:15:11.191 "physical_block_size": 4096, 00:15:11.191 "uuid": "333c0779-2b95-497e-8b0a-4aa4cf4a0e86", 00:15:11.191 "optimal_io_boundary": 0, 00:15:11.191 "md_size": 0, 00:15:11.191 "dif_type": 0, 00:15:11.191 "dif_is_head_of_md": false, 00:15:11.191 "dif_pi_format": 0 00:15:11.191 } 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "method": "bdev_wait_for_examine" 00:15:11.191 } 00:15:11.191 ] 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "subsystem": "nbd", 00:15:11.191 "config": [] 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "subsystem": "scheduler", 00:15:11.191 "config": [ 00:15:11.191 { 00:15:11.191 "method": "framework_set_scheduler", 00:15:11.191 "params": { 00:15:11.191 "name": "static" 00:15:11.191 } 00:15:11.191 } 00:15:11.191 ] 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "subsystem": "nvmf", 00:15:11.191 "config": [ 00:15:11.191 { 00:15:11.191 "method": "nvmf_set_config", 00:15:11.191 "params": { 00:15:11.191 "discovery_filter": "match_any", 00:15:11.191 "admin_cmd_passthru": { 00:15:11.191 "identify_ctrlr": false 00:15:11.191 }, 00:15:11.191 "dhchap_digests": [ 00:15:11.191 "sha256", 00:15:11.191 "sha384", 00:15:11.191 "sha512" 00:15:11.191 ], 00:15:11.191 "dhchap_dhgroups": [ 00:15:11.191 "null", 00:15:11.191 "ffdhe2048", 00:15:11.191 "ffdhe3072", 00:15:11.191 "ffdhe4096", 00:15:11.191 "ffdhe6144", 00:15:11.191 "ffdhe8192" 00:15:11.191 ] 00:15:11.191 } 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "method": "nvmf_set_max_subsystems", 00:15:11.191 "params": { 00:15:11.191 "max_subsystems": 1024 00:15:11.191 } 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "method": "nvmf_set_crdt", 00:15:11.191 "params": { 00:15:11.191 "crdt1": 0, 00:15:11.191 "crdt2": 0, 00:15:11.191 "crdt3": 0 00:15:11.191 } 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "method": "nvmf_create_transport", 00:15:11.191 "params": { 00:15:11.191 "trtype": "TCP", 00:15:11.191 "max_queue_depth": 128, 00:15:11.191 "max_io_qpairs_per_ctrlr": 127, 00:15:11.191 "in_capsule_data_size": 4096, 00:15:11.191 "max_io_size": 131072, 00:15:11.191 "io_unit_size": 131072, 00:15:11.191 "max_aq_depth": 128, 00:15:11.191 "num_shared_buffers": 511, 00:15:11.191 "buf_cache_size": 4294967295, 00:15:11.191 "dif_insert_or_strip": false, 00:15:11.191 "zcopy": false, 00:15:11.191 "c2h_success": false, 00:15:11.191 "sock_priority": 0, 00:15:11.191 "abort_timeout_sec": 1, 00:15:11.191 "ack_timeout": 0, 00:15:11.191 "data_wr_pool_size": 0 00:15:11.191 } 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "method": "nvmf_create_subsystem", 00:15:11.191 "params": { 00:15:11.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.191 "allow_any_host": false, 00:15:11.191 "serial_number": "SPDK00000000000001", 00:15:11.191 "model_number": "SPDK bdev Controller", 00:15:11.191 "max_namespaces": 10, 00:15:11.191 "min_cntlid": 1, 00:15:11.191 "max_cntlid": 65519, 00:15:11.191 "ana_reporting": false 00:15:11.191 } 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "method": "nvmf_subsystem_add_host", 00:15:11.191 "params": { 00:15:11.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.191 "host": "nqn.2016-06.io.spdk:host1", 00:15:11.191 "psk": "key0" 00:15:11.191 } 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "method": "nvmf_subsystem_add_ns", 00:15:11.191 "params": { 00:15:11.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.191 "namespace": { 00:15:11.191 "nsid": 1, 
00:15:11.191 "bdev_name": "malloc0", 00:15:11.191 "nguid": "333C07792B95497E8B0A4AA4CF4A0E86", 00:15:11.191 "uuid": "333c0779-2b95-497e-8b0a-4aa4cf4a0e86", 00:15:11.191 "no_auto_visible": false 00:15:11.191 } 00:15:11.191 } 00:15:11.191 }, 00:15:11.191 { 00:15:11.191 "method": "nvmf_subsystem_add_listener", 00:15:11.191 "params": { 00:15:11.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.191 "listen_address": { 00:15:11.191 "trtype": "TCP", 00:15:11.191 "adrfam": "IPv4", 00:15:11.191 "traddr": "10.0.0.3", 00:15:11.191 "trsvcid": "4420" 00:15:11.191 }, 00:15:11.191 "secure_channel": true 00:15:11.191 } 00:15:11.191 } 00:15:11.191 ] 00:15:11.191 } 00:15:11.191 ] 00:15:11.191 }' 00:15:11.191 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.191 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85632 00:15:11.191 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85632 00:15:11.191 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:11.191 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85632 ']' 00:15:11.191 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.191 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.191 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.192 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.192 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.192 [2024-12-16 14:31:03.306455] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:11.192 [2024-12-16 14:31:03.306571] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.468 [2024-12-16 14:31:03.456599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.468 [2024-12-16 14:31:03.476256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.468 [2024-12-16 14:31:03.476356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.468 [2024-12-16 14:31:03.476382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.468 [2024-12-16 14:31:03.476389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.468 [2024-12-16 14:31:03.476395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:11.468 [2024-12-16 14:31:03.476727] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.468 [2024-12-16 14:31:03.620987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:11.727 [2024-12-16 14:31:03.678012] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.727 [2024-12-16 14:31:03.709982] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:11.727 [2024-12-16 14:31:03.710185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:12.296 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.296 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:12.296 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:12.296 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:12.296 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.296 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.296 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=85667 00:15:12.296 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 85667 /var/tmp/bdevperf.sock 00:15:12.296 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85667 ']' 00:15:12.296 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:12.296 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:12.296 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:12.296 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.296 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.296 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:12.296 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:12.296 "subsystems": [ 00:15:12.296 { 00:15:12.296 "subsystem": "keyring", 00:15:12.296 "config": [ 00:15:12.296 { 00:15:12.296 "method": "keyring_file_add_key", 00:15:12.296 "params": { 00:15:12.296 "name": "key0", 00:15:12.296 "path": "/tmp/tmp.zEXsJxKXHj" 00:15:12.296 } 00:15:12.296 } 00:15:12.296 ] 00:15:12.296 }, 00:15:12.296 { 00:15:12.296 "subsystem": "iobuf", 00:15:12.296 "config": [ 00:15:12.296 { 00:15:12.296 "method": "iobuf_set_options", 00:15:12.296 "params": { 00:15:12.296 "small_pool_count": 8192, 00:15:12.296 "large_pool_count": 1024, 00:15:12.296 "small_bufsize": 8192, 00:15:12.296 "large_bufsize": 135168, 00:15:12.296 "enable_numa": false 00:15:12.296 } 00:15:12.296 } 00:15:12.296 ] 00:15:12.296 }, 00:15:12.296 { 00:15:12.296 "subsystem": "sock", 00:15:12.296 "config": [ 00:15:12.296 { 00:15:12.296 "method": "sock_set_default_impl", 00:15:12.296 "params": { 00:15:12.296 "impl_name": "uring" 00:15:12.296 } 00:15:12.296 }, 00:15:12.296 { 00:15:12.296 "method": "sock_impl_set_options", 00:15:12.296 "params": { 00:15:12.296 "impl_name": "ssl", 00:15:12.296 "recv_buf_size": 4096, 00:15:12.296 "send_buf_size": 4096, 00:15:12.296 "enable_recv_pipe": true, 00:15:12.296 "enable_quickack": false, 00:15:12.296 "enable_placement_id": 0, 00:15:12.296 "enable_zerocopy_send_server": true, 00:15:12.296 "enable_zerocopy_send_client": false, 00:15:12.296 "zerocopy_threshold": 0, 00:15:12.296 "tls_version": 0, 00:15:12.296 "enable_ktls": false 00:15:12.296 } 00:15:12.296 }, 00:15:12.296 { 00:15:12.296 "method": "sock_impl_set_options", 00:15:12.296 "params": { 00:15:12.296 "impl_name": "posix", 00:15:12.296 "recv_buf_size": 2097152, 00:15:12.296 "send_buf_size": 2097152, 00:15:12.296 "enable_recv_pipe": true, 00:15:12.296 "enable_quickack": false, 00:15:12.296 "enable_placement_id": 0, 00:15:12.296 "enable_zerocopy_send_server": true, 00:15:12.296 "enable_zerocopy_send_client": false, 00:15:12.296 "zerocopy_threshold": 0, 00:15:12.296 "tls_version": 0, 00:15:12.296 "enable_ktls": false 00:15:12.296 } 00:15:12.296 }, 00:15:12.296 { 00:15:12.296 "method": "sock_impl_set_options", 00:15:12.296 "params": { 00:15:12.296 "impl_name": "uring", 00:15:12.296 "recv_buf_size": 2097152, 00:15:12.296 "send_buf_size": 2097152, 00:15:12.296 "enable_recv_pipe": true, 00:15:12.296 "enable_quickack": false, 00:15:12.296 "enable_placement_id": 0, 00:15:12.296 "enable_zerocopy_send_server": false, 00:15:12.296 "enable_zerocopy_send_client": false, 00:15:12.296 "zerocopy_threshold": 0, 00:15:12.296 "tls_version": 0, 00:15:12.296 "enable_ktls": false 00:15:12.296 } 00:15:12.296 } 00:15:12.296 ] 00:15:12.296 }, 00:15:12.296 { 00:15:12.296 "subsystem": "vmd", 00:15:12.296 "config": [] 00:15:12.296 }, 00:15:12.296 { 00:15:12.296 "subsystem": "accel", 00:15:12.296 "config": [ 00:15:12.296 { 00:15:12.296 "method": "accel_set_options", 00:15:12.296 "params": { 00:15:12.296 "small_cache_size": 128, 00:15:12.296 "large_cache_size": 16, 00:15:12.296 "task_count": 2048, 00:15:12.296 "sequence_count": 
2048, 00:15:12.296 "buf_count": 2048 00:15:12.296 } 00:15:12.296 } 00:15:12.296 ] 00:15:12.296 }, 00:15:12.296 { 00:15:12.296 "subsystem": "bdev", 00:15:12.296 "config": [ 00:15:12.296 { 00:15:12.296 "method": "bdev_set_options", 00:15:12.296 "params": { 00:15:12.296 "bdev_io_pool_size": 65535, 00:15:12.296 "bdev_io_cache_size": 256, 00:15:12.296 "bdev_auto_examine": true, 00:15:12.296 "iobuf_small_cache_size": 128, 00:15:12.296 "iobuf_large_cache_size": 16 00:15:12.296 } 00:15:12.296 }, 00:15:12.296 { 00:15:12.296 "method": "bdev_raid_set_options", 00:15:12.296 "params": { 00:15:12.296 "process_window_size_kb": 1024, 00:15:12.296 "process_max_bandwidth_mb_sec": 0 00:15:12.296 } 00:15:12.296 }, 00:15:12.296 { 00:15:12.296 "method": "bdev_iscsi_set_options", 00:15:12.296 "params": { 00:15:12.296 "timeout_sec": 30 00:15:12.296 } 00:15:12.296 }, 00:15:12.296 { 00:15:12.296 "method": "bdev_nvme_set_options", 00:15:12.296 "params": { 00:15:12.296 "action_on_timeout": "none", 00:15:12.296 "timeout_us": 0, 00:15:12.296 "timeout_admin_us": 0, 00:15:12.296 "keep_alive_timeout_ms": 10000, 00:15:12.296 "arbitration_burst": 0, 00:15:12.296 "low_priority_weight": 0, 00:15:12.296 "medium_priority_weight": 0, 00:15:12.296 "high_priority_weight": 0, 00:15:12.296 "nvme_adminq_poll_period_us": 10000, 00:15:12.296 "nvme_ioq_poll_period_us": 0, 00:15:12.296 "io_queue_requests": 512, 00:15:12.296 "delay_cmd_submit": true, 00:15:12.296 "transport_retry_count": 4, 00:15:12.296 "bdev_retry_count": 3, 00:15:12.296 "transport_ack_timeout": 0, 00:15:12.296 "ctrlr_loss_timeout_sec": 0, 00:15:12.296 "reconnect_delay_sec": 0, 00:15:12.296 "fast_io_fail_timeout_sec": 0, 00:15:12.296 "disable_auto_failback": false, 00:15:12.296 "generate_uuids": false, 00:15:12.296 "transport_tos": 0, 00:15:12.296 "nvme_error_stat": false, 00:15:12.296 "rdma_srq_size": 0, 00:15:12.296 "io_path_stat": false, 00:15:12.296 "allow_accel_sequence": false, 00:15:12.296 "rdma_max_cq_size": 0, 00:15:12.296 "rdma_cm_event_timeout_ms": 0, 00:15:12.296 "dhchap_digests": [ 00:15:12.296 "sha256", 00:15:12.296 "sha384", 00:15:12.296 "sha512" 00:15:12.296 ], 00:15:12.296 "dhchap_dhgroups": [ 00:15:12.296 "null", 00:15:12.296 "ffdhe2048", 00:15:12.296 "ffdhe3072", 00:15:12.296 "ffdhe4096", 00:15:12.296 "ffdhe6144", 00:15:12.296 "ffdhe8192" 00:15:12.296 ], 00:15:12.296 "rdma_umr_per_io": false 00:15:12.296 } 00:15:12.296 }, 00:15:12.296 { 00:15:12.296 "method": "bdev_nvme_attach_controller", 00:15:12.296 "params": { 00:15:12.296 "name": "TLSTEST", 00:15:12.296 "trtype": "TCP", 00:15:12.296 "adrfam": "IPv4", 00:15:12.296 "traddr": "10.0.0.3", 00:15:12.296 "trsvcid": "4420", 00:15:12.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.296 "prchk_reftag": false, 00:15:12.296 "prchk_guard": false, 00:15:12.296 "ctrlr_loss_timeout_sec": 0, 00:15:12.296 "reconnect_delay_sec": 0, 00:15:12.296 "fast_io_fail_timeout_sec": 0, 00:15:12.296 "psk": "key0", 00:15:12.297 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:12.297 "hdgst": false, 00:15:12.297 "ddgst": false, 00:15:12.297 "multipath": "multipath" 00:15:12.297 } 00:15:12.297 }, 00:15:12.297 { 00:15:12.297 "method": "bdev_nvme_set_hotplug", 00:15:12.297 "params": { 00:15:12.297 "period_us": 100000, 00:15:12.297 "enable": false 00:15:12.297 } 00:15:12.297 }, 00:15:12.297 { 00:15:12.297 "method": "bdev_wait_for_examine" 00:15:12.297 } 00:15:12.297 ] 00:15:12.297 }, 00:15:12.297 { 00:15:12.297 "subsystem": "nbd", 00:15:12.297 "config": [] 00:15:12.297 } 00:15:12.297 ] 00:15:12.297 }' 00:15:12.297 [2024-12-16 
14:31:04.392489] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:12.297 [2024-12-16 14:31:04.392601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85667 ] 00:15:12.555 [2024-12-16 14:31:04.547536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.555 [2024-12-16 14:31:04.574090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.555 [2024-12-16 14:31:04.692174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:12.555 [2024-12-16 14:31:04.724241] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:13.492 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:13.492 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:13.492 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:13.492 Running I/O for 10 seconds... 00:15:15.367 3966.00 IOPS, 15.49 MiB/s [2024-12-16T14:31:08.944Z] 3993.00 IOPS, 15.60 MiB/s [2024-12-16T14:31:09.511Z] 4017.00 IOPS, 15.69 MiB/s [2024-12-16T14:31:10.889Z] 4030.00 IOPS, 15.74 MiB/s [2024-12-16T14:31:11.825Z] 4033.80 IOPS, 15.76 MiB/s [2024-12-16T14:31:12.761Z] 4033.83 IOPS, 15.76 MiB/s [2024-12-16T14:31:13.697Z] 4034.57 IOPS, 15.76 MiB/s [2024-12-16T14:31:14.634Z] 4032.38 IOPS, 15.75 MiB/s [2024-12-16T14:31:15.571Z] 4034.00 IOPS, 15.76 MiB/s [2024-12-16T14:31:15.571Z] 4034.10 IOPS, 15.76 MiB/s 00:15:23.371 Latency(us) 00:15:23.371 [2024-12-16T14:31:15.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.371 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:23.371 Verification LBA range: start 0x0 length 0x2000 00:15:23.371 TLSTESTn1 : 10.02 4039.80 15.78 0.00 0.00 31626.78 5391.83 24784.52 00:15:23.371 [2024-12-16T14:31:15.571Z] =================================================================================================================== 00:15:23.371 [2024-12-16T14:31:15.571Z] Total : 4039.80 15.78 0.00 0.00 31626.78 5391.83 24784.52 00:15:23.371 { 00:15:23.371 "results": [ 00:15:23.371 { 00:15:23.371 "job": "TLSTESTn1", 00:15:23.371 "core_mask": "0x4", 00:15:23.371 "workload": "verify", 00:15:23.371 "status": "finished", 00:15:23.371 "verify_range": { 00:15:23.371 "start": 0, 00:15:23.371 "length": 8192 00:15:23.371 }, 00:15:23.371 "queue_depth": 128, 00:15:23.371 "io_size": 4096, 00:15:23.371 "runtime": 10.016575, 00:15:23.371 "iops": 4039.804024828846, 00:15:23.371 "mibps": 15.78048447198768, 00:15:23.371 "io_failed": 0, 00:15:23.371 "io_timeout": 0, 00:15:23.371 "avg_latency_us": 31626.777093020908, 00:15:23.371 "min_latency_us": 5391.825454545455, 00:15:23.371 "max_latency_us": 24784.523636363636 00:15:23.371 } 00:15:23.371 ], 00:15:23.371 "core_count": 1 00:15:23.371 } 00:15:23.371 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:23.371 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 85667 00:15:23.371 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 85667 ']' 00:15:23.371 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85667 00:15:23.371 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:23.371 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.371 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85667 00:15:23.630 killing process with pid 85667 00:15:23.630 Received shutdown signal, test time was about 10.000000 seconds 00:15:23.630 00:15:23.630 Latency(us) 00:15:23.630 [2024-12-16T14:31:15.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.630 [2024-12-16T14:31:15.830Z] =================================================================================================================== 00:15:23.630 [2024-12-16T14:31:15.830Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:23.630 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:23.630 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:23.630 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85667' 00:15:23.630 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85667 00:15:23.630 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85667 00:15:23.630 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 85632 00:15:23.630 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85632 ']' 00:15:23.630 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85632 00:15:23.630 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:23.630 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.630 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85632 00:15:23.630 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:23.630 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:23.630 killing process with pid 85632 00:15:23.630 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85632' 00:15:23.630 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85632 00:15:23.630 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85632 00:15:23.910 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:23.910 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:23.910 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:23.910 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:23.910 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85806 00:15:23.910 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:23.910 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85806 00:15:23.910 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85806 ']' 00:15:23.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.910 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.910 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.910 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.910 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.910 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:23.910 [2024-12-16 14:31:15.959136] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:23.910 [2024-12-16 14:31:15.959239] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.174 [2024-12-16 14:31:16.110483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.174 [2024-12-16 14:31:16.135169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.174 [2024-12-16 14:31:16.135230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.174 [2024-12-16 14:31:16.135244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.174 [2024-12-16 14:31:16.135260] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.174 [2024-12-16 14:31:16.135279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
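The setup_nvmf_tgt sequence traced below builds the TLS-enabled target over individual RPCs instead of a canned JSON config. In sketch form, with the address, NQNs and key path exactly as they appear in this log (rpc.py path shortened):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k    # -k requires a secure (TLS) channel on this listener
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zEXsJxKXHj
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0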
00:15:24.174 [2024-12-16 14:31:16.135710] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.174 [2024-12-16 14:31:16.172030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:24.174 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.174 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:24.174 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:24.174 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:24.174 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.174 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.174 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.zEXsJxKXHj 00:15:24.174 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zEXsJxKXHj 00:15:24.174 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:24.433 [2024-12-16 14:31:16.521319] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.433 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:24.692 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:24.952 [2024-12-16 14:31:17.065494] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:24.952 [2024-12-16 14:31:17.065837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:24.952 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:25.211 malloc0 00:15:25.211 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:25.470 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zEXsJxKXHj 00:15:25.729 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:25.988 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=85854 00:15:25.988 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:25.988 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:25.988 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 85854 /var/tmp/bdevperf.sock 00:15:25.988 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85854 ']' 00:15:25.988 
14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:25.988 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.988 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:25.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:25.988 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.988 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:25.988 [2024-12-16 14:31:18.172835] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:25.988 [2024-12-16 14:31:18.172970] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85854 ] 00:15:26.247 [2024-12-16 14:31:18.326723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.247 [2024-12-16 14:31:18.352748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.247 [2024-12-16 14:31:18.388299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:26.247 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.247 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:26.247 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zEXsJxKXHj 00:15:26.509 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:26.767 [2024-12-16 14:31:18.932559] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:27.026 nvme0n1 00:15:27.026 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:27.026 Running I/O for 1 seconds... 
00:15:27.963 3859.00 IOPS, 15.07 MiB/s 00:15:27.963 Latency(us) 00:15:27.963 [2024-12-16T14:31:20.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.963 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.963 Verification LBA range: start 0x0 length 0x2000 00:15:27.963 nvme0n1 : 1.02 3921.38 15.32 0.00 0.00 32274.79 1697.98 24546.21 00:15:27.963 [2024-12-16T14:31:20.163Z] =================================================================================================================== 00:15:27.963 [2024-12-16T14:31:20.163Z] Total : 3921.38 15.32 0.00 0.00 32274.79 1697.98 24546.21 00:15:27.963 { 00:15:27.963 "results": [ 00:15:27.963 { 00:15:27.963 "job": "nvme0n1", 00:15:27.963 "core_mask": "0x2", 00:15:27.963 "workload": "verify", 00:15:27.963 "status": "finished", 00:15:27.963 "verify_range": { 00:15:27.963 "start": 0, 00:15:27.963 "length": 8192 00:15:27.963 }, 00:15:27.963 "queue_depth": 128, 00:15:27.963 "io_size": 4096, 00:15:27.963 "runtime": 1.016734, 00:15:27.963 "iops": 3921.3796332177344, 00:15:27.963 "mibps": 15.317889192256775, 00:15:27.963 "io_failed": 0, 00:15:27.963 "io_timeout": 0, 00:15:27.963 "avg_latency_us": 32274.785824839815, 00:15:27.963 "min_latency_us": 1697.9781818181818, 00:15:27.963 "max_latency_us": 24546.21090909091 00:15:27.963 } 00:15:27.963 ], 00:15:27.963 "core_count": 1 00:15:27.963 } 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 85854 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85854 ']' 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85854 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85854 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:28.223 killing process with pid 85854 00:15:28.223 Received shutdown signal, test time was about 1.000000 seconds 00:15:28.223 00:15:28.223 Latency(us) 00:15:28.223 [2024-12-16T14:31:20.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.223 [2024-12-16T14:31:20.423Z] =================================================================================================================== 00:15:28.223 [2024-12-16T14:31:20.423Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85854' 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85854 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85854 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 85806 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85806 ']' 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85806 00:15:28.223 14:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85806 00:15:28.223 killing process with pid 85806 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85806' 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85806 00:15:28.223 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85806 00:15:28.482 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:28.482 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:28.482 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:28.482 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.482 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:28.482 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85892 00:15:28.482 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85892 00:15:28.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.482 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85892 ']' 00:15:28.482 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.482 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.482 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.482 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.482 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.482 [2024-12-16 14:31:20.549112] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:28.483 [2024-12-16 14:31:20.549365] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.742 [2024-12-16 14:31:20.694062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.742 [2024-12-16 14:31:20.712795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.742 [2024-12-16 14:31:20.713123] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:28.742 [2024-12-16 14:31:20.713257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.742 [2024-12-16 14:31:20.713394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.742 [2024-12-16 14:31:20.713427] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.742 [2024-12-16 14:31:20.713820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.742 [2024-12-16 14:31:20.742508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.742 [2024-12-16 14:31:20.867189] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.742 malloc0 00:15:28.742 [2024-12-16 14:31:20.893284] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:28.742 [2024-12-16 14:31:20.893670] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=85917 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 85917 /var/tmp/bdevperf.sock 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85917 ']' 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:28.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
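On the initiator side, the bdevperf instance that comes up next (pid 85917) is driven over its own RPC socket; the trace below reduces to the following three commands, using the same key, address and NQNs as the rest of this run:

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zEXsJxKXHj
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # runs the timed verify workload whose results follow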
00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.742 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.001 [2024-12-16 14:31:20.981022] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:29.001 [2024-12-16 14:31:20.981131] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85917 ] 00:15:29.001 [2024-12-16 14:31:21.125314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.001 [2024-12-16 14:31:21.144252] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.001 [2024-12-16 14:31:21.173844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:29.260 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.260 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:29.260 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zEXsJxKXHj 00:15:29.519 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:29.778 [2024-12-16 14:31:21.762160] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:29.778 nvme0n1 00:15:29.778 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:30.037 Running I/O for 1 seconds... 
00:15:30.974 4599.00 IOPS, 17.96 MiB/s 00:15:30.974 Latency(us) 00:15:30.974 [2024-12-16T14:31:23.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.974 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:30.974 Verification LBA range: start 0x0 length 0x2000 00:15:30.974 nvme0n1 : 1.03 4602.00 17.98 0.00 0.00 27526.87 8519.68 21090.68 00:15:30.974 [2024-12-16T14:31:23.174Z] =================================================================================================================== 00:15:30.974 [2024-12-16T14:31:23.174Z] Total : 4602.00 17.98 0.00 0.00 27526.87 8519.68 21090.68 00:15:30.974 { 00:15:30.974 "results": [ 00:15:30.974 { 00:15:30.974 "job": "nvme0n1", 00:15:30.974 "core_mask": "0x2", 00:15:30.974 "workload": "verify", 00:15:30.974 "status": "finished", 00:15:30.974 "verify_range": { 00:15:30.974 "start": 0, 00:15:30.974 "length": 8192 00:15:30.974 }, 00:15:30.974 "queue_depth": 128, 00:15:30.974 "io_size": 4096, 00:15:30.974 "runtime": 1.027162, 00:15:30.974 "iops": 4602.000463412782, 00:15:30.974 "mibps": 17.97656431020618, 00:15:30.974 "io_failed": 0, 00:15:30.974 "io_timeout": 0, 00:15:30.974 "avg_latency_us": 27526.871928765122, 00:15:30.974 "min_latency_us": 8519.68, 00:15:30.974 "max_latency_us": 21090.676363636365 00:15:30.974 } 00:15:30.974 ], 00:15:30.974 "core_count": 1 00:15:30.974 } 00:15:30.974 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:30.974 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.974 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.974 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.974 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:30.974 "subsystems": [ 00:15:30.974 { 00:15:30.974 "subsystem": "keyring", 00:15:30.974 "config": [ 00:15:30.974 { 00:15:30.974 "method": "keyring_file_add_key", 00:15:30.974 "params": { 00:15:30.974 "name": "key0", 00:15:30.974 "path": "/tmp/tmp.zEXsJxKXHj" 00:15:30.974 } 00:15:30.974 } 00:15:30.974 ] 00:15:30.974 }, 00:15:30.974 { 00:15:30.974 "subsystem": "iobuf", 00:15:30.974 "config": [ 00:15:30.974 { 00:15:30.974 "method": "iobuf_set_options", 00:15:30.974 "params": { 00:15:30.974 "small_pool_count": 8192, 00:15:30.974 "large_pool_count": 1024, 00:15:30.974 "small_bufsize": 8192, 00:15:30.974 "large_bufsize": 135168, 00:15:30.974 "enable_numa": false 00:15:30.974 } 00:15:30.974 } 00:15:30.974 ] 00:15:30.974 }, 00:15:30.974 { 00:15:30.974 "subsystem": "sock", 00:15:30.974 "config": [ 00:15:30.974 { 00:15:30.974 "method": "sock_set_default_impl", 00:15:30.974 "params": { 00:15:30.974 "impl_name": "uring" 00:15:30.974 } 00:15:30.974 }, 00:15:30.974 { 00:15:30.974 "method": "sock_impl_set_options", 00:15:30.974 "params": { 00:15:30.974 "impl_name": "ssl", 00:15:30.974 "recv_buf_size": 4096, 00:15:30.974 "send_buf_size": 4096, 00:15:30.974 "enable_recv_pipe": true, 00:15:30.974 "enable_quickack": false, 00:15:30.974 "enable_placement_id": 0, 00:15:30.974 "enable_zerocopy_send_server": true, 00:15:30.974 "enable_zerocopy_send_client": false, 00:15:30.974 "zerocopy_threshold": 0, 00:15:30.974 "tls_version": 0, 00:15:30.974 "enable_ktls": false 00:15:30.974 } 00:15:30.974 }, 00:15:30.974 { 00:15:30.974 "method": "sock_impl_set_options", 00:15:30.974 "params": { 00:15:30.974 "impl_name": "posix", 
00:15:30.974 "recv_buf_size": 2097152, 00:15:30.974 "send_buf_size": 2097152, 00:15:30.974 "enable_recv_pipe": true, 00:15:30.974 "enable_quickack": false, 00:15:30.974 "enable_placement_id": 0, 00:15:30.974 "enable_zerocopy_send_server": true, 00:15:30.974 "enable_zerocopy_send_client": false, 00:15:30.974 "zerocopy_threshold": 0, 00:15:30.974 "tls_version": 0, 00:15:30.974 "enable_ktls": false 00:15:30.974 } 00:15:30.974 }, 00:15:30.974 { 00:15:30.974 "method": "sock_impl_set_options", 00:15:30.974 "params": { 00:15:30.974 "impl_name": "uring", 00:15:30.974 "recv_buf_size": 2097152, 00:15:30.974 "send_buf_size": 2097152, 00:15:30.974 "enable_recv_pipe": true, 00:15:30.974 "enable_quickack": false, 00:15:30.974 "enable_placement_id": 0, 00:15:30.974 "enable_zerocopy_send_server": false, 00:15:30.974 "enable_zerocopy_send_client": false, 00:15:30.974 "zerocopy_threshold": 0, 00:15:30.974 "tls_version": 0, 00:15:30.974 "enable_ktls": false 00:15:30.974 } 00:15:30.974 } 00:15:30.974 ] 00:15:30.974 }, 00:15:30.974 { 00:15:30.974 "subsystem": "vmd", 00:15:30.974 "config": [] 00:15:30.974 }, 00:15:30.974 { 00:15:30.974 "subsystem": "accel", 00:15:30.974 "config": [ 00:15:30.974 { 00:15:30.974 "method": "accel_set_options", 00:15:30.974 "params": { 00:15:30.974 "small_cache_size": 128, 00:15:30.974 "large_cache_size": 16, 00:15:30.974 "task_count": 2048, 00:15:30.974 "sequence_count": 2048, 00:15:30.974 "buf_count": 2048 00:15:30.974 } 00:15:30.974 } 00:15:30.974 ] 00:15:30.974 }, 00:15:30.974 { 00:15:30.974 "subsystem": "bdev", 00:15:30.974 "config": [ 00:15:30.974 { 00:15:30.974 "method": "bdev_set_options", 00:15:30.974 "params": { 00:15:30.974 "bdev_io_pool_size": 65535, 00:15:30.974 "bdev_io_cache_size": 256, 00:15:30.974 "bdev_auto_examine": true, 00:15:30.974 "iobuf_small_cache_size": 128, 00:15:30.974 "iobuf_large_cache_size": 16 00:15:30.974 } 00:15:30.975 }, 00:15:30.975 { 00:15:30.975 "method": "bdev_raid_set_options", 00:15:30.975 "params": { 00:15:30.975 "process_window_size_kb": 1024, 00:15:30.975 "process_max_bandwidth_mb_sec": 0 00:15:30.975 } 00:15:30.975 }, 00:15:30.975 { 00:15:30.975 "method": "bdev_iscsi_set_options", 00:15:30.975 "params": { 00:15:30.975 "timeout_sec": 30 00:15:30.975 } 00:15:30.975 }, 00:15:30.975 { 00:15:30.975 "method": "bdev_nvme_set_options", 00:15:30.975 "params": { 00:15:30.975 "action_on_timeout": "none", 00:15:30.975 "timeout_us": 0, 00:15:30.975 "timeout_admin_us": 0, 00:15:30.975 "keep_alive_timeout_ms": 10000, 00:15:30.975 "arbitration_burst": 0, 00:15:30.975 "low_priority_weight": 0, 00:15:30.975 "medium_priority_weight": 0, 00:15:30.975 "high_priority_weight": 0, 00:15:30.975 "nvme_adminq_poll_period_us": 10000, 00:15:30.975 "nvme_ioq_poll_period_us": 0, 00:15:30.975 "io_queue_requests": 0, 00:15:30.975 "delay_cmd_submit": true, 00:15:30.975 "transport_retry_count": 4, 00:15:30.975 "bdev_retry_count": 3, 00:15:30.975 "transport_ack_timeout": 0, 00:15:30.975 "ctrlr_loss_timeout_sec": 0, 00:15:30.975 "reconnect_delay_sec": 0, 00:15:30.975 "fast_io_fail_timeout_sec": 0, 00:15:30.975 "disable_auto_failback": false, 00:15:30.975 "generate_uuids": false, 00:15:30.975 "transport_tos": 0, 00:15:30.975 "nvme_error_stat": false, 00:15:30.975 "rdma_srq_size": 0, 00:15:30.975 "io_path_stat": false, 00:15:30.975 "allow_accel_sequence": false, 00:15:30.975 "rdma_max_cq_size": 0, 00:15:30.975 "rdma_cm_event_timeout_ms": 0, 00:15:30.975 "dhchap_digests": [ 00:15:30.975 "sha256", 00:15:30.975 "sha384", 00:15:30.975 "sha512" 00:15:30.975 ], 00:15:30.975 
"dhchap_dhgroups": [ 00:15:30.975 "null", 00:15:30.975 "ffdhe2048", 00:15:30.975 "ffdhe3072", 00:15:30.975 "ffdhe4096", 00:15:30.975 "ffdhe6144", 00:15:30.975 "ffdhe8192" 00:15:30.975 ], 00:15:30.975 "rdma_umr_per_io": false 00:15:30.975 } 00:15:30.975 }, 00:15:30.975 { 00:15:30.975 "method": "bdev_nvme_set_hotplug", 00:15:30.975 "params": { 00:15:30.975 "period_us": 100000, 00:15:30.975 "enable": false 00:15:30.975 } 00:15:30.975 }, 00:15:30.975 { 00:15:30.975 "method": "bdev_malloc_create", 00:15:30.975 "params": { 00:15:30.975 "name": "malloc0", 00:15:30.975 "num_blocks": 8192, 00:15:30.975 "block_size": 4096, 00:15:30.975 "physical_block_size": 4096, 00:15:30.975 "uuid": "4f5c65fc-e9ac-41ec-8e0b-c48dbaa8c6e7", 00:15:30.975 "optimal_io_boundary": 0, 00:15:30.975 "md_size": 0, 00:15:30.975 "dif_type": 0, 00:15:30.975 "dif_is_head_of_md": false, 00:15:30.975 "dif_pi_format": 0 00:15:30.975 } 00:15:30.975 }, 00:15:30.975 { 00:15:30.975 "method": "bdev_wait_for_examine" 00:15:30.975 } 00:15:30.975 ] 00:15:30.975 }, 00:15:30.975 { 00:15:30.975 "subsystem": "nbd", 00:15:30.975 "config": [] 00:15:30.975 }, 00:15:30.975 { 00:15:30.975 "subsystem": "scheduler", 00:15:30.975 "config": [ 00:15:30.975 { 00:15:30.975 "method": "framework_set_scheduler", 00:15:30.975 "params": { 00:15:30.975 "name": "static" 00:15:30.975 } 00:15:30.975 } 00:15:30.975 ] 00:15:30.975 }, 00:15:30.975 { 00:15:30.975 "subsystem": "nvmf", 00:15:30.975 "config": [ 00:15:30.975 { 00:15:30.975 "method": "nvmf_set_config", 00:15:30.975 "params": { 00:15:30.975 "discovery_filter": "match_any", 00:15:30.975 "admin_cmd_passthru": { 00:15:30.975 "identify_ctrlr": false 00:15:30.975 }, 00:15:30.975 "dhchap_digests": [ 00:15:30.975 "sha256", 00:15:30.975 "sha384", 00:15:30.975 "sha512" 00:15:30.975 ], 00:15:30.975 "dhchap_dhgroups": [ 00:15:30.975 "null", 00:15:30.975 "ffdhe2048", 00:15:30.975 "ffdhe3072", 00:15:30.975 "ffdhe4096", 00:15:30.975 "ffdhe6144", 00:15:30.975 "ffdhe8192" 00:15:30.975 ] 00:15:30.975 } 00:15:30.975 }, 00:15:30.975 { 00:15:30.975 "method": "nvmf_set_max_subsystems", 00:15:30.975 "params": { 00:15:30.975 "max_subsystems": 1024 00:15:30.975 } 00:15:30.975 }, 00:15:30.975 { 00:15:30.975 "method": "nvmf_set_crdt", 00:15:30.975 "params": { 00:15:30.975 "crdt1": 0, 00:15:30.975 "crdt2": 0, 00:15:30.975 "crdt3": 0 00:15:30.975 } 00:15:30.975 }, 00:15:30.975 { 00:15:30.975 "method": "nvmf_create_transport", 00:15:30.975 "params": { 00:15:30.975 "trtype": "TCP", 00:15:30.975 "max_queue_depth": 128, 00:15:30.975 "max_io_qpairs_per_ctrlr": 127, 00:15:30.975 "in_capsule_data_size": 4096, 00:15:30.975 "max_io_size": 131072, 00:15:30.975 "io_unit_size": 131072, 00:15:30.975 "max_aq_depth": 128, 00:15:30.975 "num_shared_buffers": 511, 00:15:30.975 "buf_cache_size": 4294967295, 00:15:30.975 "dif_insert_or_strip": false, 00:15:30.975 "zcopy": false, 00:15:30.975 "c2h_success": false, 00:15:30.975 "sock_priority": 0, 00:15:30.975 "abort_timeout_sec": 1, 00:15:30.975 "ack_timeout": 0, 00:15:30.975 "data_wr_pool_size": 0 00:15:30.975 } 00:15:30.975 }, 00:15:30.975 { 00:15:30.975 "method": "nvmf_create_subsystem", 00:15:30.975 "params": { 00:15:30.975 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.975 "allow_any_host": false, 00:15:30.975 "serial_number": "00000000000000000000", 00:15:30.975 "model_number": "SPDK bdev Controller", 00:15:30.975 "max_namespaces": 32, 00:15:30.975 "min_cntlid": 1, 00:15:30.975 "max_cntlid": 65519, 00:15:30.975 "ana_reporting": false 00:15:30.975 } 00:15:30.975 }, 00:15:30.975 { 00:15:30.975 
"method": "nvmf_subsystem_add_host", 00:15:30.975 "params": { 00:15:30.975 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.975 "host": "nqn.2016-06.io.spdk:host1", 00:15:30.975 "psk": "key0" 00:15:30.975 } 00:15:30.975 }, 00:15:30.975 { 00:15:30.975 "method": "nvmf_subsystem_add_ns", 00:15:30.975 "params": { 00:15:30.975 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.975 "namespace": { 00:15:30.975 "nsid": 1, 00:15:30.975 "bdev_name": "malloc0", 00:15:30.975 "nguid": "4F5C65FCE9AC41EC8E0BC48DBAA8C6E7", 00:15:30.975 "uuid": "4f5c65fc-e9ac-41ec-8e0b-c48dbaa8c6e7", 00:15:30.975 "no_auto_visible": false 00:15:30.975 } 00:15:30.975 } 00:15:30.975 }, 00:15:30.975 { 00:15:30.975 "method": "nvmf_subsystem_add_listener", 00:15:30.975 "params": { 00:15:30.975 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.975 "listen_address": { 00:15:30.975 "trtype": "TCP", 00:15:30.975 "adrfam": "IPv4", 00:15:30.975 "traddr": "10.0.0.3", 00:15:30.975 "trsvcid": "4420" 00:15:30.975 }, 00:15:30.975 "secure_channel": false, 00:15:30.975 "sock_impl": "ssl" 00:15:30.975 } 00:15:30.975 } 00:15:30.975 ] 00:15:30.975 } 00:15:30.975 ] 00:15:30.975 }' 00:15:30.975 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:31.544 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:31.544 "subsystems": [ 00:15:31.544 { 00:15:31.544 "subsystem": "keyring", 00:15:31.544 "config": [ 00:15:31.544 { 00:15:31.544 "method": "keyring_file_add_key", 00:15:31.544 "params": { 00:15:31.544 "name": "key0", 00:15:31.544 "path": "/tmp/tmp.zEXsJxKXHj" 00:15:31.544 } 00:15:31.544 } 00:15:31.544 ] 00:15:31.544 }, 00:15:31.544 { 00:15:31.544 "subsystem": "iobuf", 00:15:31.544 "config": [ 00:15:31.544 { 00:15:31.544 "method": "iobuf_set_options", 00:15:31.544 "params": { 00:15:31.544 "small_pool_count": 8192, 00:15:31.544 "large_pool_count": 1024, 00:15:31.544 "small_bufsize": 8192, 00:15:31.544 "large_bufsize": 135168, 00:15:31.544 "enable_numa": false 00:15:31.544 } 00:15:31.544 } 00:15:31.544 ] 00:15:31.544 }, 00:15:31.544 { 00:15:31.544 "subsystem": "sock", 00:15:31.544 "config": [ 00:15:31.544 { 00:15:31.544 "method": "sock_set_default_impl", 00:15:31.544 "params": { 00:15:31.544 "impl_name": "uring" 00:15:31.544 } 00:15:31.544 }, 00:15:31.544 { 00:15:31.544 "method": "sock_impl_set_options", 00:15:31.544 "params": { 00:15:31.544 "impl_name": "ssl", 00:15:31.544 "recv_buf_size": 4096, 00:15:31.544 "send_buf_size": 4096, 00:15:31.544 "enable_recv_pipe": true, 00:15:31.544 "enable_quickack": false, 00:15:31.544 "enable_placement_id": 0, 00:15:31.544 "enable_zerocopy_send_server": true, 00:15:31.544 "enable_zerocopy_send_client": false, 00:15:31.544 "zerocopy_threshold": 0, 00:15:31.544 "tls_version": 0, 00:15:31.544 "enable_ktls": false 00:15:31.544 } 00:15:31.544 }, 00:15:31.544 { 00:15:31.544 "method": "sock_impl_set_options", 00:15:31.544 "params": { 00:15:31.544 "impl_name": "posix", 00:15:31.544 "recv_buf_size": 2097152, 00:15:31.544 "send_buf_size": 2097152, 00:15:31.544 "enable_recv_pipe": true, 00:15:31.544 "enable_quickack": false, 00:15:31.544 "enable_placement_id": 0, 00:15:31.544 "enable_zerocopy_send_server": true, 00:15:31.544 "enable_zerocopy_send_client": false, 00:15:31.544 "zerocopy_threshold": 0, 00:15:31.544 "tls_version": 0, 00:15:31.544 "enable_ktls": false 00:15:31.544 } 00:15:31.544 }, 00:15:31.544 { 00:15:31.544 "method": "sock_impl_set_options", 00:15:31.544 "params": { 00:15:31.544 
"impl_name": "uring", 00:15:31.544 "recv_buf_size": 2097152, 00:15:31.544 "send_buf_size": 2097152, 00:15:31.544 "enable_recv_pipe": true, 00:15:31.544 "enable_quickack": false, 00:15:31.544 "enable_placement_id": 0, 00:15:31.544 "enable_zerocopy_send_server": false, 00:15:31.544 "enable_zerocopy_send_client": false, 00:15:31.544 "zerocopy_threshold": 0, 00:15:31.544 "tls_version": 0, 00:15:31.544 "enable_ktls": false 00:15:31.544 } 00:15:31.544 } 00:15:31.544 ] 00:15:31.544 }, 00:15:31.544 { 00:15:31.544 "subsystem": "vmd", 00:15:31.544 "config": [] 00:15:31.544 }, 00:15:31.544 { 00:15:31.544 "subsystem": "accel", 00:15:31.544 "config": [ 00:15:31.544 { 00:15:31.544 "method": "accel_set_options", 00:15:31.544 "params": { 00:15:31.544 "small_cache_size": 128, 00:15:31.544 "large_cache_size": 16, 00:15:31.544 "task_count": 2048, 00:15:31.544 "sequence_count": 2048, 00:15:31.544 "buf_count": 2048 00:15:31.544 } 00:15:31.544 } 00:15:31.544 ] 00:15:31.544 }, 00:15:31.544 { 00:15:31.544 "subsystem": "bdev", 00:15:31.544 "config": [ 00:15:31.544 { 00:15:31.544 "method": "bdev_set_options", 00:15:31.544 "params": { 00:15:31.544 "bdev_io_pool_size": 65535, 00:15:31.544 "bdev_io_cache_size": 256, 00:15:31.544 "bdev_auto_examine": true, 00:15:31.544 "iobuf_small_cache_size": 128, 00:15:31.544 "iobuf_large_cache_size": 16 00:15:31.544 } 00:15:31.544 }, 00:15:31.544 { 00:15:31.544 "method": "bdev_raid_set_options", 00:15:31.544 "params": { 00:15:31.544 "process_window_size_kb": 1024, 00:15:31.544 "process_max_bandwidth_mb_sec": 0 00:15:31.544 } 00:15:31.544 }, 00:15:31.544 { 00:15:31.544 "method": "bdev_iscsi_set_options", 00:15:31.544 "params": { 00:15:31.544 "timeout_sec": 30 00:15:31.544 } 00:15:31.544 }, 00:15:31.544 { 00:15:31.544 "method": "bdev_nvme_set_options", 00:15:31.544 "params": { 00:15:31.544 "action_on_timeout": "none", 00:15:31.544 "timeout_us": 0, 00:15:31.544 "timeout_admin_us": 0, 00:15:31.544 "keep_alive_timeout_ms": 10000, 00:15:31.544 "arbitration_burst": 0, 00:15:31.544 "low_priority_weight": 0, 00:15:31.544 "medium_priority_weight": 0, 00:15:31.544 "high_priority_weight": 0, 00:15:31.545 "nvme_adminq_poll_period_us": 10000, 00:15:31.545 "nvme_ioq_poll_period_us": 0, 00:15:31.545 "io_queue_requests": 512, 00:15:31.545 "delay_cmd_submit": true, 00:15:31.545 "transport_retry_count": 4, 00:15:31.545 "bdev_retry_count": 3, 00:15:31.545 "transport_ack_timeout": 0, 00:15:31.545 "ctrlr_loss_timeout_sec": 0, 00:15:31.545 "reconnect_delay_sec": 0, 00:15:31.545 "fast_io_fail_timeout_sec": 0, 00:15:31.545 "disable_auto_failback": false, 00:15:31.545 "generate_uuids": false, 00:15:31.545 "transport_tos": 0, 00:15:31.545 "nvme_error_stat": false, 00:15:31.545 "rdma_srq_size": 0, 00:15:31.545 "io_path_stat": false, 00:15:31.545 "allow_accel_sequence": false, 00:15:31.545 "rdma_max_cq_size": 0, 00:15:31.545 "rdma_cm_event_timeout_ms": 0, 00:15:31.545 "dhchap_digests": [ 00:15:31.545 "sha256", 00:15:31.545 "sha384", 00:15:31.545 "sha512" 00:15:31.545 ], 00:15:31.545 "dhchap_dhgroups": [ 00:15:31.545 "null", 00:15:31.545 "ffdhe2048", 00:15:31.545 "ffdhe3072", 00:15:31.545 "ffdhe4096", 00:15:31.545 "ffdhe6144", 00:15:31.545 "ffdhe8192" 00:15:31.545 ], 00:15:31.545 "rdma_umr_per_io": false 00:15:31.545 } 00:15:31.545 }, 00:15:31.545 { 00:15:31.545 "method": "bdev_nvme_attach_controller", 00:15:31.545 "params": { 00:15:31.545 "name": "nvme0", 00:15:31.545 "trtype": "TCP", 00:15:31.545 "adrfam": "IPv4", 00:15:31.545 "traddr": "10.0.0.3", 00:15:31.545 "trsvcid": "4420", 00:15:31.545 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:15:31.545 "prchk_reftag": false, 00:15:31.545 "prchk_guard": false, 00:15:31.545 "ctrlr_loss_timeout_sec": 0, 00:15:31.545 "reconnect_delay_sec": 0, 00:15:31.545 "fast_io_fail_timeout_sec": 0, 00:15:31.545 "psk": "key0", 00:15:31.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:31.545 "hdgst": false, 00:15:31.545 "ddgst": false, 00:15:31.545 "multipath": "multipath" 00:15:31.545 } 00:15:31.545 }, 00:15:31.545 { 00:15:31.545 "method": "bdev_nvme_set_hotplug", 00:15:31.545 "params": { 00:15:31.545 "period_us": 100000, 00:15:31.545 "enable": false 00:15:31.545 } 00:15:31.545 }, 00:15:31.545 { 00:15:31.545 "method": "bdev_enable_histogram", 00:15:31.545 "params": { 00:15:31.545 "name": "nvme0n1", 00:15:31.545 "enable": true 00:15:31.545 } 00:15:31.545 }, 00:15:31.545 { 00:15:31.545 "method": "bdev_wait_for_examine" 00:15:31.545 } 00:15:31.545 ] 00:15:31.545 }, 00:15:31.545 { 00:15:31.545 "subsystem": "nbd", 00:15:31.545 "config": [] 00:15:31.545 } 00:15:31.545 ] 00:15:31.545 }' 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 85917 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85917 ']' 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85917 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85917 00:15:31.545 killing process with pid 85917 00:15:31.545 Received shutdown signal, test time was about 1.000000 seconds 00:15:31.545 00:15:31.545 Latency(us) 00:15:31.545 [2024-12-16T14:31:23.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.545 [2024-12-16T14:31:23.745Z] =================================================================================================================== 00:15:31.545 [2024-12-16T14:31:23.745Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85917' 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85917 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85917 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 85892 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85892 ']' 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85892 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85892 00:15:31.545 killing process with pid 85892 00:15:31.545 14:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85892' 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85892 00:15:31.545 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85892 00:15:31.805 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:31.805 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:31.805 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:31.805 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:31.805 "subsystems": [ 00:15:31.805 { 00:15:31.805 "subsystem": "keyring", 00:15:31.805 "config": [ 00:15:31.805 { 00:15:31.805 "method": "keyring_file_add_key", 00:15:31.805 "params": { 00:15:31.805 "name": "key0", 00:15:31.805 "path": "/tmp/tmp.zEXsJxKXHj" 00:15:31.805 } 00:15:31.805 } 00:15:31.805 ] 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "subsystem": "iobuf", 00:15:31.805 "config": [ 00:15:31.805 { 00:15:31.805 "method": "iobuf_set_options", 00:15:31.805 "params": { 00:15:31.805 "small_pool_count": 8192, 00:15:31.805 "large_pool_count": 1024, 00:15:31.805 "small_bufsize": 8192, 00:15:31.805 "large_bufsize": 135168, 00:15:31.805 "enable_numa": false 00:15:31.805 } 00:15:31.805 } 00:15:31.805 ] 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "subsystem": "sock", 00:15:31.805 "config": [ 00:15:31.805 { 00:15:31.805 "method": "sock_set_default_impl", 00:15:31.805 "params": { 00:15:31.805 "impl_name": "uring" 00:15:31.805 } 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "method": "sock_impl_set_options", 00:15:31.805 "params": { 00:15:31.805 "impl_name": "ssl", 00:15:31.805 "recv_buf_size": 4096, 00:15:31.805 "send_buf_size": 4096, 00:15:31.805 "enable_recv_pipe": true, 00:15:31.805 "enable_quickack": false, 00:15:31.805 "enable_placement_id": 0, 00:15:31.805 "enable_zerocopy_send_server": true, 00:15:31.805 "enable_zerocopy_send_client": false, 00:15:31.805 "zerocopy_threshold": 0, 00:15:31.805 "tls_version": 0, 00:15:31.805 "enable_ktls": false 00:15:31.805 } 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "method": "sock_impl_set_options", 00:15:31.805 "params": { 00:15:31.805 "impl_name": "posix", 00:15:31.805 "recv_buf_size": 2097152, 00:15:31.805 "send_buf_size": 2097152, 00:15:31.805 "enable_recv_pipe": true, 00:15:31.805 "enable_quickack": false, 00:15:31.805 "enable_placement_id": 0, 00:15:31.805 "enable_zerocopy_send_server": true, 00:15:31.805 "enable_zerocopy_send_client": false, 00:15:31.805 "zerocopy_threshold": 0, 00:15:31.805 "tls_version": 0, 00:15:31.805 "enable_ktls": false 00:15:31.805 } 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "method": "sock_impl_set_options", 00:15:31.805 "params": { 00:15:31.805 "impl_name": "uring", 00:15:31.805 "recv_buf_size": 2097152, 00:15:31.805 "send_buf_size": 2097152, 00:15:31.805 "enable_recv_pipe": true, 00:15:31.805 "enable_quickack": false, 00:15:31.805 "enable_placement_id": 0, 00:15:31.805 "enable_zerocopy_send_server": false, 00:15:31.805 "enable_zerocopy_send_client": false, 00:15:31.805 "zerocopy_threshold": 0, 00:15:31.805 "tls_version": 0, 00:15:31.805 
"enable_ktls": false 00:15:31.805 } 00:15:31.805 } 00:15:31.805 ] 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "subsystem": "vmd", 00:15:31.805 "config": [] 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "subsystem": "accel", 00:15:31.805 "config": [ 00:15:31.805 { 00:15:31.805 "method": "accel_set_options", 00:15:31.805 "params": { 00:15:31.805 "small_cache_size": 128, 00:15:31.805 "large_cache_size": 16, 00:15:31.805 "task_count": 2048, 00:15:31.805 "sequence_count": 2048, 00:15:31.805 "buf_count": 2048 00:15:31.805 } 00:15:31.805 } 00:15:31.805 ] 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "subsystem": "bdev", 00:15:31.805 "config": [ 00:15:31.805 { 00:15:31.805 "method": "bdev_set_options", 00:15:31.805 "params": { 00:15:31.805 "bdev_io_pool_size": 65535, 00:15:31.805 "bdev_io_cache_size": 256, 00:15:31.805 "bdev_auto_examine": true, 00:15:31.805 "iobuf_small_cache_size": 128, 00:15:31.805 "iobuf_large_cache_size": 16 00:15:31.805 } 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "method": "bdev_raid_set_options", 00:15:31.805 "params": { 00:15:31.805 "process_window_size_kb": 1024, 00:15:31.805 "process_max_bandwidth_mb_sec": 0 00:15:31.805 } 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "method": "bdev_iscsi_set_options", 00:15:31.805 "params": { 00:15:31.805 "timeout_sec": 30 00:15:31.805 } 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "method": "bdev_nvme_set_options", 00:15:31.805 "params": { 00:15:31.805 "action_on_timeout": "none", 00:15:31.805 "timeout_us": 0, 00:15:31.805 "timeout_admin_us": 0, 00:15:31.805 "keep_alive_timeout_ms": 10000, 00:15:31.805 "arbitration_burst": 0, 00:15:31.805 "low_priority_weight": 0, 00:15:31.805 "medium_priority_weight": 0, 00:15:31.805 "high_priority_weight": 0, 00:15:31.805 "nvme_adminq_poll_period_us": 10000, 00:15:31.805 "nvme_ioq_poll_period_us": 0, 00:15:31.805 "io_queue_requests": 0, 00:15:31.805 "delay_cmd_submit": true, 00:15:31.805 "transport_retry_count": 4, 00:15:31.805 "bdev_retry_count": 3, 00:15:31.805 "transport_ack_timeout": 0, 00:15:31.805 "ctrlr_loss_timeout_sec": 0, 00:15:31.805 "reconnect_delay_sec": 0, 00:15:31.805 "fast_io_fail_timeout_sec": 0, 00:15:31.805 "disable_auto_failback": false, 00:15:31.805 "generate_uuids": false, 00:15:31.805 "transport_tos": 0, 00:15:31.805 "nvme_error_stat": false, 00:15:31.805 "rdma_srq_size": 0, 00:15:31.805 "io_path_stat": false, 00:15:31.805 "allow_accel_sequence": false, 00:15:31.805 "rdma_max_cq_size": 0, 00:15:31.805 "rdma_cm_event_timeout_ms": 0, 00:15:31.805 "dhchap_digests": [ 00:15:31.805 "sha256", 00:15:31.805 "sha384", 00:15:31.805 "sha512" 00:15:31.805 ], 00:15:31.805 "dhchap_dhgroups": [ 00:15:31.805 "null", 00:15:31.805 "ffdhe2048", 00:15:31.805 "ffdhe3072", 00:15:31.805 "ffdhe4096", 00:15:31.805 "ffdhe6144", 00:15:31.805 "ffdhe8192" 00:15:31.805 ], 00:15:31.805 "rdma_umr_per_io": false 00:15:31.805 } 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "method": "bdev_nvme_set_hotplug", 00:15:31.805 "params": { 00:15:31.805 "period_us": 100000, 00:15:31.805 "enable": false 00:15:31.805 } 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "method": "bdev_malloc_create", 00:15:31.805 "params": { 00:15:31.805 "name": "malloc0", 00:15:31.805 "num_blocks": 8192, 00:15:31.805 "block_size": 4096, 00:15:31.805 "physical_block_size": 4096, 00:15:31.805 "uuid": "4f5c65fc-e9ac-41ec-8e0b-c48dbaa8c6e7", 00:15:31.805 "optimal_io_boundary": 0, 00:15:31.805 "md_size": 0, 00:15:31.805 "dif_type": 0, 00:15:31.805 "dif_is_head_of_md": false, 00:15:31.805 "dif_pi_format": 0 00:15:31.805 } 00:15:31.805 }, 00:15:31.805 { 
00:15:31.805 "method": "bdev_wait_for_examine" 00:15:31.805 } 00:15:31.805 ] 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "subsystem": "nbd", 00:15:31.805 "config": [] 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "subsystem": "scheduler", 00:15:31.805 "config": [ 00:15:31.805 { 00:15:31.805 "method": "framework_set_scheduler", 00:15:31.805 "params": { 00:15:31.805 "name": "static" 00:15:31.805 } 00:15:31.805 } 00:15:31.805 ] 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "subsystem": "nvmf", 00:15:31.805 "config": [ 00:15:31.805 { 00:15:31.805 "method": "nvmf_set_config", 00:15:31.805 "params": { 00:15:31.805 "discovery_filter": "match_any", 00:15:31.805 "admin_cmd_passthru": { 00:15:31.805 "identify_ctrlr": false 00:15:31.805 }, 00:15:31.805 "dhchap_digests": [ 00:15:31.805 "sha256", 00:15:31.805 "sha384", 00:15:31.805 "sha512" 00:15:31.805 ], 00:15:31.805 "dhchap_dhgroups": [ 00:15:31.805 "null", 00:15:31.805 "ffdhe2048", 00:15:31.805 "ffdhe3072", 00:15:31.805 "ffdhe4096", 00:15:31.805 "ffdhe6144", 00:15:31.805 "ffdhe8192" 00:15:31.805 ] 00:15:31.805 } 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "method": "nvmf_set_max_subsystems", 00:15:31.805 "params": { 00:15:31.805 "max_subsystems": 1024 00:15:31.805 } 00:15:31.805 }, 00:15:31.805 { 00:15:31.805 "method": "nvmf_set_crdt", 00:15:31.805 "params": { 00:15:31.805 "crdt1": 0, 00:15:31.805 "crdt2": 0, 00:15:31.805 "crdt3": 0 00:15:31.805 } 00:15:31.805 }, 00:15:31.806 { 00:15:31.806 "method": "nvmf_create_transport", 00:15:31.806 "params": { 00:15:31.806 "trtype": "TCP", 00:15:31.806 "max_queue_depth": 128, 00:15:31.806 "max_io_qpairs_per_ctrlr": 127, 00:15:31.806 "in_capsule_data_size": 4096, 00:15:31.806 "max_io_size": 131072, 00:15:31.806 "io_unit_size": 131072, 00:15:31.806 "max_aq_depth": 128, 00:15:31.806 "num_shared_buffers": 511, 00:15:31.806 "buf_cache_size": 4294967295, 00:15:31.806 "dif_insert_or_strip": false, 00:15:31.806 "zcopy": false, 00:15:31.806 "c2h_success": false, 00:15:31.806 "sock_priority": 0, 00:15:31.806 "abort_timeout_sec": 1, 00:15:31.806 "ack_timeout": 0, 00:15:31.806 "data_wr_pool_size": 0 00:15:31.806 } 00:15:31.806 }, 00:15:31.806 { 00:15:31.806 "method": "nvmf_create_subsystem", 00:15:31.806 "params": { 00:15:31.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.806 "allow_any_host": false, 00:15:31.806 "serial_number": "00000000000000000000", 00:15:31.806 "model_number": "SPDK bdev Controller", 00:15:31.806 "max_namespaces": 32, 00:15:31.806 "min_cntlid": 1, 00:15:31.806 "max_cntlid": 65519, 00:15:31.806 "ana_reporting": false 00:15:31.806 } 00:15:31.806 }, 00:15:31.806 { 00:15:31.806 "method": "nvmf_subsystem_add_host", 00:15:31.806 "params": { 00:15:31.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.806 "host": "nqn.2016-06.io.spdk:host1", 00:15:31.806 "psk": "key0" 00:15:31.806 } 00:15:31.806 }, 00:15:31.806 { 00:15:31.806 "method": "nvmf_subsystem_add_ns", 00:15:31.806 "params": { 00:15:31.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.806 "namespace": { 00:15:31.806 "nsid": 1, 00:15:31.806 "bdev_name": "malloc0", 00:15:31.806 "nguid": "4F5C65FCE9AC41EC8E0BC48DBAA8C6E7", 00:15:31.806 "uuid": "4f5c65fc-e9ac-41ec-8e0b-c48dbaa8c6e7", 00:15:31.806 "no_auto_visible": false 00:15:31.806 } 00:15:31.806 } 00:15:31.806 }, 00:15:31.806 { 00:15:31.806 "method": "nvmf_subsystem_add_listener", 00:15:31.806 "params": { 00:15:31.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.806 "listen_address": { 00:15:31.806 "trtype": "TCP", 00:15:31.806 "adrfam": "IPv4", 00:15:31.806 "traddr": "10.0.0.3", 00:15:31.806 
"trsvcid": "4420" 00:15:31.806 }, 00:15:31.806 "secure_channel": false, 00:15:31.806 "sock_impl": "ssl" 00:15:31.806 } 00:15:31.806 } 00:15:31.806 ] 00:15:31.806 } 00:15:31.806 ] 00:15:31.806 }' 00:15:31.806 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:31.806 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85964 00:15:31.806 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:31.806 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85964 00:15:31.806 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85964 ']' 00:15:31.806 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.806 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.806 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.806 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.806 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:31.806 [2024-12-16 14:31:23.880587] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:31.806 [2024-12-16 14:31:23.880829] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.065 [2024-12-16 14:31:24.018894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.065 [2024-12-16 14:31:24.038147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.065 [2024-12-16 14:31:24.038465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.065 [2024-12-16 14:31:24.038485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.065 [2024-12-16 14:31:24.038492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.065 [2024-12-16 14:31:24.038500] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:32.065 [2024-12-16 14:31:24.038869] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.065 [2024-12-16 14:31:24.179196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:32.065 [2024-12-16 14:31:24.235592] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.323 [2024-12-16 14:31:24.267501] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:32.323 [2024-12-16 14:31:24.267745] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:32.891 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.891 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:32.891 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:32.891 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:32.891 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.891 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.891 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=85996 00:15:32.891 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 85996 /var/tmp/bdevperf.sock 00:15:32.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:32.891 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85996 ']' 00:15:32.891 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:32.891 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.891 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:32.891 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.891 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.891 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:32.891 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:32.891 "subsystems": [ 00:15:32.891 { 00:15:32.891 "subsystem": "keyring", 00:15:32.891 "config": [ 00:15:32.891 { 00:15:32.891 "method": "keyring_file_add_key", 00:15:32.891 "params": { 00:15:32.891 "name": "key0", 00:15:32.891 "path": "/tmp/tmp.zEXsJxKXHj" 00:15:32.891 } 00:15:32.891 } 00:15:32.891 ] 00:15:32.891 }, 00:15:32.891 { 00:15:32.892 "subsystem": "iobuf", 00:15:32.892 "config": [ 00:15:32.892 { 00:15:32.892 "method": "iobuf_set_options", 00:15:32.892 "params": { 00:15:32.892 "small_pool_count": 8192, 00:15:32.892 "large_pool_count": 1024, 00:15:32.892 "small_bufsize": 8192, 00:15:32.892 "large_bufsize": 135168, 00:15:32.892 "enable_numa": false 00:15:32.892 } 00:15:32.892 } 00:15:32.892 ] 00:15:32.892 }, 00:15:32.892 { 00:15:32.892 "subsystem": "sock", 00:15:32.892 "config": [ 00:15:32.892 { 00:15:32.892 "method": "sock_set_default_impl", 00:15:32.892 "params": { 00:15:32.892 "impl_name": "uring" 00:15:32.892 } 00:15:32.892 }, 00:15:32.892 { 00:15:32.892 "method": "sock_impl_set_options", 00:15:32.892 "params": { 00:15:32.892 "impl_name": "ssl", 00:15:32.892 "recv_buf_size": 4096, 00:15:32.892 "send_buf_size": 4096, 00:15:32.892 "enable_recv_pipe": true, 00:15:32.892 "enable_quickack": false, 00:15:32.892 "enable_placement_id": 0, 00:15:32.892 "enable_zerocopy_send_server": true, 00:15:32.892 "enable_zerocopy_send_client": false, 00:15:32.892 "zerocopy_threshold": 0, 00:15:32.892 "tls_version": 0, 00:15:32.892 "enable_ktls": false 00:15:32.892 } 00:15:32.892 }, 00:15:32.892 { 00:15:32.892 "method": "sock_impl_set_options", 00:15:32.892 "params": { 00:15:32.892 "impl_name": "posix", 00:15:32.892 "recv_buf_size": 2097152, 00:15:32.892 "send_buf_size": 2097152, 00:15:32.892 "enable_recv_pipe": true, 00:15:32.892 "enable_quickack": false, 00:15:32.892 "enable_placement_id": 0, 00:15:32.892 "enable_zerocopy_send_server": true, 00:15:32.892 "enable_zerocopy_send_client": false, 00:15:32.892 "zerocopy_threshold": 0, 00:15:32.892 "tls_version": 0, 00:15:32.892 "enable_ktls": false 00:15:32.892 } 00:15:32.892 }, 00:15:32.892 { 00:15:32.892 "method": "sock_impl_set_options", 00:15:32.892 "params": { 00:15:32.892 "impl_name": "uring", 00:15:32.892 "recv_buf_size": 2097152, 00:15:32.892 "send_buf_size": 2097152, 00:15:32.892 "enable_recv_pipe": true, 00:15:32.892 "enable_quickack": false, 00:15:32.892 "enable_placement_id": 0, 00:15:32.892 "enable_zerocopy_send_server": false, 00:15:32.892 "enable_zerocopy_send_client": false, 00:15:32.892 "zerocopy_threshold": 0, 00:15:32.892 "tls_version": 0, 00:15:32.892 "enable_ktls": false 00:15:32.892 } 00:15:32.892 } 00:15:32.892 ] 00:15:32.892 }, 00:15:32.892 { 00:15:32.892 "subsystem": "vmd", 00:15:32.892 "config": [] 00:15:32.892 }, 00:15:32.892 { 00:15:32.892 "subsystem": "accel", 00:15:32.892 "config": [ 00:15:32.892 { 00:15:32.892 "method": "accel_set_options", 00:15:32.892 "params": { 00:15:32.892 "small_cache_size": 128, 00:15:32.892 "large_cache_size": 16, 00:15:32.892 "task_count": 2048, 00:15:32.892 "sequence_count": 2048, 
00:15:32.892 "buf_count": 2048 00:15:32.892 } 00:15:32.892 } 00:15:32.892 ] 00:15:32.892 }, 00:15:32.892 { 00:15:32.892 "subsystem": "bdev", 00:15:32.892 "config": [ 00:15:32.892 { 00:15:32.892 "method": "bdev_set_options", 00:15:32.892 "params": { 00:15:32.892 "bdev_io_pool_size": 65535, 00:15:32.892 "bdev_io_cache_size": 256, 00:15:32.892 "bdev_auto_examine": true, 00:15:32.892 "iobuf_small_cache_size": 128, 00:15:32.892 "iobuf_large_cache_size": 16 00:15:32.892 } 00:15:32.892 }, 00:15:32.892 { 00:15:32.892 "method": "bdev_raid_set_options", 00:15:32.892 "params": { 00:15:32.892 "process_window_size_kb": 1024, 00:15:32.892 "process_max_bandwidth_mb_sec": 0 00:15:32.892 } 00:15:32.892 }, 00:15:32.892 { 00:15:32.892 "method": "bdev_iscsi_set_options", 00:15:32.892 "params": { 00:15:32.892 "timeout_sec": 30 00:15:32.892 } 00:15:32.892 }, 00:15:32.892 { 00:15:32.892 "method": "bdev_nvme_set_options", 00:15:32.892 "params": { 00:15:32.892 "action_on_timeout": "none", 00:15:32.892 "timeout_us": 0, 00:15:32.892 "timeout_admin_us": 0, 00:15:32.892 "keep_alive_timeout_ms": 10000, 00:15:32.892 "arbitration_burst": 0, 00:15:32.892 "low_priority_weight": 0, 00:15:32.892 "medium_priority_weight": 0, 00:15:32.892 "high_priority_weight": 0, 00:15:32.892 "nvme_adminq_poll_period_us": 10000, 00:15:32.892 "nvme_ioq_poll_period_us": 0, 00:15:32.892 "io_queue_requests": 512, 00:15:32.892 "delay_cmd_submit": true, 00:15:32.892 "transport_retry_count": 4, 00:15:32.892 "bdev_retry_count": 3, 00:15:32.892 "transport_ack_timeout": 0, 00:15:32.892 "ctrlr_loss_timeout_sec": 0, 00:15:32.892 "reconnect_delay_sec": 0, 00:15:32.892 "fast_io_fail_timeout_sec": 0, 00:15:32.892 "disable_auto_failback": false, 00:15:32.892 "generate_uuids": false, 00:15:32.892 "transport_tos": 0, 00:15:32.892 "nvme_error_stat": false, 00:15:32.892 "rdma_srq_size": 0, 00:15:32.892 "io_path_stat": false, 00:15:32.892 "allow_accel_sequence": false, 00:15:32.892 "rdma_max_cq_size": 0, 00:15:32.892 "rdma_cm_event_timeout_ms": 0, 00:15:32.892 "dhchap_digests": [ 00:15:32.892 "sha256", 00:15:32.892 "sha384", 00:15:32.892 "sha512" 00:15:32.892 ], 00:15:32.892 "dhchap_dhgroups": [ 00:15:32.892 "null", 00:15:32.892 "ffdhe2048", 00:15:32.892 "ffdhe3072", 00:15:32.892 "ffdhe4096", 00:15:32.892 "ffdhe6144", 00:15:32.892 "ffdhe8192" 00:15:32.892 ], 00:15:32.892 "rdma_umr_per_io": false 00:15:32.892 } 00:15:32.892 }, 00:15:32.892 { 00:15:32.892 "method": "bdev_nvme_attach_controller", 00:15:32.892 "params": { 00:15:32.892 "name": "nvme0", 00:15:32.892 "trtype": "TCP", 00:15:32.892 "adrfam": "IPv4", 00:15:32.892 "traddr": "10.0.0.3", 00:15:32.892 "trsvcid": "4420", 00:15:32.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.892 "prchk_reftag": false, 00:15:32.892 "prchk_guard": false, 00:15:32.892 "ctrlr_loss_timeout_sec": 0, 00:15:32.892 "reconnect_delay_sec": 0, 00:15:32.892 "fast_io_fail_timeout_sec": 0, 00:15:32.892 "psk": "key0", 00:15:32.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:32.892 "hdgst": false, 00:15:32.892 "ddgst": false, 00:15:32.892 "multipath": "multipath" 00:15:32.892 } 00:15:32.892 }, 00:15:32.892 { 00:15:32.892 "method": "bdev_nvme_set_hotplug", 00:15:32.892 "params": { 00:15:32.892 "period_us": 100000, 00:15:32.892 "enable": false 00:15:32.892 } 00:15:32.892 }, 00:15:32.892 { 00:15:32.892 "method": "bdev_enable_histogram", 00:15:32.892 "params": { 00:15:32.892 "name": "nvme0n1", 00:15:32.892 "enable": true 00:15:32.892 } 00:15:32.892 }, 00:15:32.892 { 00:15:32.892 "method": "bdev_wait_for_examine" 00:15:32.892 } 
00:15:32.892 ] 00:15:32.892 }, 00:15:32.892 { 00:15:32.892 "subsystem": "nbd", 00:15:32.892 "config": [] 00:15:32.892 } 00:15:32.892 ] 00:15:32.892 }' 00:15:32.892 [2024-12-16 14:31:24.931908] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:32.892 [2024-12-16 14:31:24.932189] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85996 ] 00:15:32.892 [2024-12-16 14:31:25.070199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.151 [2024-12-16 14:31:25.092224] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.151 [2024-12-16 14:31:25.200387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.151 [2024-12-16 14:31:25.229326] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:33.717 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.717 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:33.717 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:33.717 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:34.284 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.284 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:34.284 Running I/O for 1 seconds... 
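The bdevperf side is rebuilt the same way: target/tls.sh@274 feeds the $bperfcfg JSON captured from the first bdevperf instance to a fresh one over /dev/fd/63, so the keyring entry, the bdev_nvme_attach_controller call with psk key0 and the bdev_enable_histogram step are replayed from config rather than issued by hand. A sketch of the invocation and of the sanity checks that follow in the trace:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")
  # confirm the controller from the replayed config actually came up ...
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expects nvme0
  # ... then drive the same 1-second verify workload
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests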
00:15:35.220 4608.00 IOPS, 18.00 MiB/s 00:15:35.220 Latency(us) 00:15:35.220 [2024-12-16T14:31:27.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.220 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:35.220 Verification LBA range: start 0x0 length 0x2000 00:15:35.220 nvme0n1 : 1.02 4627.99 18.08 0.00 0.00 27387.77 9353.77 20256.58 00:15:35.220 [2024-12-16T14:31:27.420Z] =================================================================================================================== 00:15:35.220 [2024-12-16T14:31:27.420Z] Total : 4627.99 18.08 0.00 0.00 27387.77 9353.77 20256.58 00:15:35.220 { 00:15:35.220 "results": [ 00:15:35.220 { 00:15:35.220 "job": "nvme0n1", 00:15:35.220 "core_mask": "0x2", 00:15:35.220 "workload": "verify", 00:15:35.220 "status": "finished", 00:15:35.220 "verify_range": { 00:15:35.220 "start": 0, 00:15:35.220 "length": 8192 00:15:35.220 }, 00:15:35.220 "queue_depth": 128, 00:15:35.220 "io_size": 4096, 00:15:35.220 "runtime": 1.023339, 00:15:35.220 "iops": 4627.987402024158, 00:15:35.220 "mibps": 18.078075789156866, 00:15:35.220 "io_failed": 0, 00:15:35.220 "io_timeout": 0, 00:15:35.220 "avg_latency_us": 27387.77316953317, 00:15:35.220 "min_latency_us": 9353.774545454546, 00:15:35.220 "max_latency_us": 20256.581818181818 00:15:35.220 } 00:15:35.220 ], 00:15:35.220 "core_count": 1 00:15:35.220 } 00:15:35.220 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:35.220 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:35.220 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:35.220 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:15:35.220 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:15:35.220 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:35.220 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:35.220 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:35.220 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:35.220 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:35.220 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:35.220 nvmf_trace.0 00:15:35.479 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:15:35.479 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 85996 00:15:35.479 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85996 ']' 00:15:35.479 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85996 00:15:35.479 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:35.479 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:35.479 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85996 00:15:35.479 killing process 
with pid 85996 00:15:35.479 Received shutdown signal, test time was about 1.000000 seconds 00:15:35.479 00:15:35.479 Latency(us) 00:15:35.479 [2024-12-16T14:31:27.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.479 [2024-12-16T14:31:27.679Z] =================================================================================================================== 00:15:35.479 [2024-12-16T14:31:27.679Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:35.479 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:35.479 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:35.479 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85996' 00:15:35.479 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85996 00:15:35.479 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85996 00:15:35.480 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:35.480 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:35.480 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:35.480 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:35.480 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:35.480 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:35.480 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:35.480 rmmod nvme_tcp 00:15:35.480 rmmod nvme_fabrics 00:15:35.739 rmmod nvme_keyring 00:15:35.739 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:35.739 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:35.739 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:35.739 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 85964 ']' 00:15:35.739 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 85964 00:15:35.739 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85964 ']' 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85964 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85964 00:15:35.740 killing process with pid 85964 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85964' 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85964 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85964 
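As a quick consistency check on the two bdevperf runs above: at a 4096-byte I/O size, throughput in MiB/s is simply IOPS * 4096 / 2^20, and with a queue depth of 128 the average latency should land near 128 / IOPS. Both hold for the reported figures (4602.00 IOPS / 17.98 MiB/s in the first run, 4627.99 IOPS / 18.08 MiB/s in the second):

  awk 'BEGIN { printf "%.2f  %.2f MiB/s\n", 4602.00*4096/1048576, 4627.99*4096/1048576 }'
  # 17.98  18.08 MiB/s
  awk 'BEGIN { printf "%.0f us\n", 128/4627.99*1e6 }'
  # ~27658 us, in line with the 27387.77 us average reported for the second run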
00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:35.740 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:35.999 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:35.999 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:35.999 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:35.999 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:35.999 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:35.999 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:35.999 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.999 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:35.999 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.999 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:35.999 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.D9ylwxeOk7 /tmp/tmp.txNhJ3MWbg /tmp/tmp.zEXsJxKXHj 00:15:35.999 00:15:35.999 real 1m20.021s 00:15:35.999 user 2m10.019s 00:15:35.999 sys 0m25.973s 00:15:35.999 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:35.999 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:35.999 
************************************ 00:15:35.999 END TEST nvmf_tls 00:15:35.999 ************************************ 00:15:35.999 14:31:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:35.999 14:31:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:35.999 14:31:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:35.999 14:31:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:35.999 ************************************ 00:15:35.999 START TEST nvmf_fips 00:15:35.999 ************************************ 00:15:35.999 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:36.260 * Looking for test storage... 00:15:36.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:36.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.260 --rc genhtml_branch_coverage=1 00:15:36.260 --rc genhtml_function_coverage=1 00:15:36.260 --rc genhtml_legend=1 00:15:36.260 --rc geninfo_all_blocks=1 00:15:36.260 --rc geninfo_unexecuted_blocks=1 00:15:36.260 00:15:36.260 ' 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:36.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.260 --rc genhtml_branch_coverage=1 00:15:36.260 --rc genhtml_function_coverage=1 00:15:36.260 --rc genhtml_legend=1 00:15:36.260 --rc geninfo_all_blocks=1 00:15:36.260 --rc geninfo_unexecuted_blocks=1 00:15:36.260 00:15:36.260 ' 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:36.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.260 --rc genhtml_branch_coverage=1 00:15:36.260 --rc genhtml_function_coverage=1 00:15:36.260 --rc genhtml_legend=1 00:15:36.260 --rc geninfo_all_blocks=1 00:15:36.260 --rc geninfo_unexecuted_blocks=1 00:15:36.260 00:15:36.260 ' 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:36.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.260 --rc genhtml_branch_coverage=1 00:15:36.260 --rc genhtml_function_coverage=1 00:15:36.260 --rc genhtml_legend=1 00:15:36.260 --rc geninfo_all_blocks=1 00:15:36.260 --rc geninfo_unexecuted_blocks=1 00:15:36.260 00:15:36.260 ' 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
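The lt/cmp_versions trace above is scripts/common.sh comparing the installed lcov version (1.15 here) against 2, which appears to be how autotest_common.sh decides to export the older '--rc lcov_branch_coverage=1' style options into LCOV_OPTS. The comparison splits both version strings on '.', '-' and ':', normalizes each field with decimal(), and walks the fields until one side wins. A reduced sketch of that logic, reusing the helper names from the trace:

  lt() { cmp_versions "$1" '<' "$2"; }    # succeeds when the first version sorts before the second
  IFS=.-: read -ra ver1 <<< "1.15"        # -> (1 15)
  IFS=.-: read -ra ver2 <<< "2"           # -> (2)
  # field 0: 1 < 2, so the walk stops early and lt 1.15 2 returns 0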
00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:36.260 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:36.260 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:15:36.261 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:15:36.521 Error setting digest 00:15:36.521 40E2CD69D47F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:36.521 40E2CD69D47F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:36.521 
14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:36.521 Cannot find device "nvmf_init_br" 00:15:36.521 14:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:36.521 Cannot find device "nvmf_init_br2" 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:36.521 Cannot find device "nvmf_tgt_br" 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:36.521 Cannot find device "nvmf_tgt_br2" 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:36.521 Cannot find device "nvmf_init_br" 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:36.521 Cannot find device "nvmf_init_br2" 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:36.521 Cannot find device "nvmf_tgt_br" 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:36.521 Cannot find device "nvmf_tgt_br2" 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:36.521 Cannot find device "nvmf_br" 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:36.521 Cannot find device "nvmf_init_if" 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:36.521 Cannot find device "nvmf_init_if2" 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:36.521 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:36.522 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.522 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:36.522 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:36.522 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.522 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:36.522 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:36.522 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:36.522 14:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:36.522 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:36.522 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:36.522 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:36.522 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:36.522 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:36.522 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:36.781 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:36.781 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:15:36.781 00:15:36.781 --- 10.0.0.3 ping statistics --- 00:15:36.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.781 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:36.781 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:36.781 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:15:36.781 00:15:36.781 --- 10.0.0.4 ping statistics --- 00:15:36.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.781 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:36.781 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:36.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:36.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:36.781 00:15:36.781 --- 10.0.0.1 ping statistics --- 00:15:36.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.781 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:36.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:36.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:15:36.782 00:15:36.782 --- 10.0.0.2 ping statistics --- 00:15:36.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.782 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=86327 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 86327 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 86327 ']' 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.782 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:37.041 [2024-12-16 14:31:28.997412] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
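Everything from nvmf_veth_init up to this point builds a small bridged veth topology (initiator interfaces in the root namespace, target interfaces inside nvmf_tgt_ns_spdk), opens TCP port 4420 in iptables, confirms reachability with the pings above, and only then launches nvmf_tgt inside the namespace. A condensed sketch of that setup, reusing the names seen in the trace but omitting the second initiator/target pair and all error handling; the readiness wait at the end is illustrative, since the harness's waitforlisten polls the RPC socket through rpc.py instead:

    # Namespace plus one veth pair per side, bridged together in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    # Start the target inside the namespace and wait for its RPC socket to appear
    # (illustrative loop; the real waitforlisten retries rpc.py against the socket).
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    tgt_pid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done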
00:15:37.041 [2024-12-16 14:31:28.997522] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.041 [2024-12-16 14:31:29.145634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.041 [2024-12-16 14:31:29.170016] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.041 [2024-12-16 14:31:29.170074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.041 [2024-12-16 14:31:29.170089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.041 [2024-12-16 14:31:29.170099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.041 [2024-12-16 14:31:29.170108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.041 [2024-12-16 14:31:29.170487] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.041 [2024-12-16 14:31:29.207067] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:37.328 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.328 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:37.328 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:37.328 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:37.328 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:37.328 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.328 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:37.328 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:37.328 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:37.328 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.UYE 00:15:37.328 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:37.328 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.UYE 00:15:37.328 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.UYE 00:15:37.328 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.UYE 00:15:37.328 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.591 [2024-12-16 14:31:29.592328] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.591 [2024-12-16 14:31:29.608288] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:37.591 [2024-12-16 14:31:29.608510] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:37.591 malloc0 00:15:37.591 14:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:37.591 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=86356 00:15:37.591 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:37.591 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 86356 /var/tmp/bdevperf.sock 00:15:37.591 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 86356 ']' 00:15:37.591 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:37.591 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.591 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:37.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:37.591 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.591 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:37.591 [2024-12-16 14:31:29.753755] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:37.591 [2024-12-16 14:31:29.753849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86356 ] 00:15:37.850 [2024-12-16 14:31:29.898590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.850 [2024-12-16 14:31:29.918354] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.850 [2024-12-16 14:31:29.946584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:37.850 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.850 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:37.850 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.UYE 00:15:38.109 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:38.368 [2024-12-16 14:31:30.542508] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:38.627 TLSTESTn1 00:15:38.627 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:38.627 Running I/O for 10 seconds... 
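The TLSTESTn1 run is driven entirely over bdevperf's private RPC socket: the PSK written to $key_path is registered with keyring_file_add_key, the controller is attached over TCP with --psk key0, and bdevperf.py perform_tests starts the 10-second verify workload. A condensed sketch of that initiator-side sequence, assuming the target already exposes nqn.2016-06.io.spdk:cnode1 with TLS on 10.0.0.3:4420 (set up by setup_nvmf_tgt_conf earlier in fips.sh) and with the waitforlisten step on /var/tmp/bdevperf.sock omitted:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_sock=/var/tmp/bdevperf.sock

    # Launch bdevperf idle (-z) on its own RPC socket; the workload is started later via RPC.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r "$bdevperf_sock" -q 128 -o 4096 -w verify -t 10 &

    # Register the PSK with bdevperf's keyring, attach the controller over TLS,
    # then drive the 10-second verify run.
    "$rpc" -s "$bdevperf_sock" keyring_file_add_key key0 "$key_path"
    "$rpc" -s "$bdevperf_sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$bdevperf_sock" perform_tests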
00:15:40.940 4632.00 IOPS, 18.09 MiB/s [2024-12-16T14:31:34.077Z] 4651.50 IOPS, 18.17 MiB/s [2024-12-16T14:31:35.013Z] 4609.00 IOPS, 18.00 MiB/s [2024-12-16T14:31:35.949Z] 4618.75 IOPS, 18.04 MiB/s [2024-12-16T14:31:36.885Z] 4612.40 IOPS, 18.02 MiB/s [2024-12-16T14:31:37.821Z] 4620.17 IOPS, 18.05 MiB/s [2024-12-16T14:31:38.757Z] 4617.86 IOPS, 18.04 MiB/s [2024-12-16T14:31:40.134Z] 4613.12 IOPS, 18.02 MiB/s [2024-12-16T14:31:41.070Z] 4604.56 IOPS, 17.99 MiB/s [2024-12-16T14:31:41.070Z] 4588.20 IOPS, 17.92 MiB/s 00:15:48.870 Latency(us) 00:15:48.870 [2024-12-16T14:31:41.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.870 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:48.870 Verification LBA range: start 0x0 length 0x2000 00:15:48.870 TLSTESTn1 : 10.01 4594.21 17.95 0.00 0.00 27812.49 4736.47 21567.30 00:15:48.870 [2024-12-16T14:31:41.070Z] =================================================================================================================== 00:15:48.870 [2024-12-16T14:31:41.070Z] Total : 4594.21 17.95 0.00 0.00 27812.49 4736.47 21567.30 00:15:48.870 { 00:15:48.870 "results": [ 00:15:48.870 { 00:15:48.870 "job": "TLSTESTn1", 00:15:48.870 "core_mask": "0x4", 00:15:48.870 "workload": "verify", 00:15:48.870 "status": "finished", 00:15:48.870 "verify_range": { 00:15:48.870 "start": 0, 00:15:48.870 "length": 8192 00:15:48.870 }, 00:15:48.870 "queue_depth": 128, 00:15:48.870 "io_size": 4096, 00:15:48.870 "runtime": 10.014552, 00:15:48.870 "iops": 4594.214499060967, 00:15:48.870 "mibps": 17.9461503869569, 00:15:48.870 "io_failed": 0, 00:15:48.870 "io_timeout": 0, 00:15:48.870 "avg_latency_us": 27812.488028784883, 00:15:48.871 "min_latency_us": 4736.465454545454, 00:15:48.871 "max_latency_us": 21567.30181818182 00:15:48.871 } 00:15:48.871 ], 00:15:48.871 "core_count": 1 00:15:48.871 } 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:48.871 nvmf_trace.0 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 86356 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 86356 ']' 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
86356 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86356 00:15:48.871 killing process with pid 86356 00:15:48.871 Received shutdown signal, test time was about 10.000000 seconds 00:15:48.871 00:15:48.871 Latency(us) 00:15:48.871 [2024-12-16T14:31:41.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.871 [2024-12-16T14:31:41.071Z] =================================================================================================================== 00:15:48.871 [2024-12-16T14:31:41.071Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86356' 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 86356 00:15:48.871 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 86356 00:15:48.871 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:48.871 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:48.871 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:48.871 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:48.871 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:48.871 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:48.871 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:49.130 rmmod nvme_tcp 00:15:49.130 rmmod nvme_fabrics 00:15:49.130 rmmod nvme_keyring 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 86327 ']' 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 86327 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 86327 ']' 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 86327 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86327 00:15:49.130 killing process with pid 86327 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86327' 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 86327 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 86327 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:15:49.130 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:15:49.131 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:49.131 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:49.131 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:49.131 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:49.131 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:49.131 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:49.131 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:49.390 14:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.UYE 00:15:49.390 00:15:49.390 real 0m13.364s 00:15:49.390 user 0m18.403s 00:15:49.390 sys 0m5.447s 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:49.390 ************************************ 00:15:49.390 END TEST nvmf_fips 00:15:49.390 ************************************ 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:49.390 ************************************ 00:15:49.390 START TEST nvmf_control_msg_list 00:15:49.390 ************************************ 00:15:49.390 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:49.650 * Looking for test storage... 00:15:49.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.650 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:49.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.651 --rc genhtml_branch_coverage=1 00:15:49.651 --rc genhtml_function_coverage=1 00:15:49.651 --rc genhtml_legend=1 00:15:49.651 --rc geninfo_all_blocks=1 00:15:49.651 --rc geninfo_unexecuted_blocks=1 00:15:49.651 00:15:49.651 ' 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:49.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.651 --rc genhtml_branch_coverage=1 00:15:49.651 --rc genhtml_function_coverage=1 00:15:49.651 --rc genhtml_legend=1 00:15:49.651 --rc geninfo_all_blocks=1 00:15:49.651 --rc geninfo_unexecuted_blocks=1 00:15:49.651 00:15:49.651 ' 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:49.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.651 --rc genhtml_branch_coverage=1 00:15:49.651 --rc genhtml_function_coverage=1 00:15:49.651 --rc genhtml_legend=1 00:15:49.651 --rc geninfo_all_blocks=1 00:15:49.651 --rc geninfo_unexecuted_blocks=1 00:15:49.651 00:15:49.651 ' 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:49.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.651 --rc genhtml_branch_coverage=1 00:15:49.651 --rc genhtml_function_coverage=1 00:15:49.651 --rc genhtml_legend=1 00:15:49.651 --rc geninfo_all_blocks=1 00:15:49.651 --rc 
geninfo_unexecuted_blocks=1 00:15:49.651 00:15:49.651 ' 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:49.651 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:49.651 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:49.652 Cannot find device "nvmf_init_br" 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:49.652 Cannot find device "nvmf_init_br2" 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:49.652 Cannot find device "nvmf_tgt_br" 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:49.652 Cannot find device "nvmf_tgt_br2" 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:49.652 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:49.911 Cannot find device "nvmf_init_br" 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:49.911 Cannot find device "nvmf_init_br2" 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:49.911 Cannot find device "nvmf_tgt_br" 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:49.911 Cannot find device "nvmf_tgt_br2" 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:49.911 Cannot find device "nvmf_br" 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:49.911 Cannot find 
device "nvmf_init_if" 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:49.911 Cannot find device "nvmf_init_if2" 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:49.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:49.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:49.911 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:49.911 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:49.911 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:49.911 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:49.911 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:49.911 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:49.911 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:49.911 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:49.911 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:49.911 14:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:49.911 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:49.911 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:49.911 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:49.911 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:49.911 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:49.911 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:49.911 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:50.170 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:50.170 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:50.171 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:50.171 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:15:50.171 00:15:50.171 --- 10.0.0.3 ping statistics --- 00:15:50.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.171 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:50.171 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:50.171 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:15:50.171 00:15:50.171 --- 10.0.0.4 ping statistics --- 00:15:50.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.171 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:50.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:50.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:15:50.171 00:15:50.171 --- 10.0.0.1 ping statistics --- 00:15:50.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.171 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:50.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:50.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:15:50.171 00:15:50.171 --- 10.0.0.2 ping statistics --- 00:15:50.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.171 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=86741 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 86741 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 86741 ']' 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
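The trace above has nvmfappstart launch the SPDK target inside the nvmf_tgt_ns_spdk namespace and then wait for its RPC socket before the test configures it. A minimal standalone sketch of that pattern, assuming the paths from the trace and using a simplified stand-in for waitforlisten (scripts/rpc.py in place of the test's rpc_cmd wrapper), looks roughly like:

    # Start the NVMe-oF target inside the test network namespace (same flags as traced above).
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!

    # Simplified wait-for-listen: poll for the RPC socket (max_retries=100 as in the trace).
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done

    # Configure the target over JSON-RPC; these are the same RPC names and arguments invoked below.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    "$rpc" nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    "$rpc" bdev_malloc_create -b Malloc0 32 512
    "$rpc" nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

This is a sketch of the sequence the trace performs, not a replacement for the test scripts; error handling and cleanup (killprocess, nvmftestfini) are omitted.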
00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.171 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.171 [2024-12-16 14:31:42.249614] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:50.171 [2024-12-16 14:31:42.249703] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.430 [2024-12-16 14:31:42.402979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.430 [2024-12-16 14:31:42.425919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.430 [2024-12-16 14:31:42.425975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.430 [2024-12-16 14:31:42.425989] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.430 [2024-12-16 14:31:42.425999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.430 [2024-12-16 14:31:42.426008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.430 [2024-12-16 14:31:42.426339] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.430 [2024-12-16 14:31:42.459406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:50.430 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.430 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:15:50.430 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:50.430 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:50.430 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.430 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.430 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:50.430 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:50.430 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:50.430 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.430 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.430 [2024-12-16 14:31:42.554700] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.430 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.430 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:50.430 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.430 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.431 Malloc0 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.431 [2024-12-16 14:31:42.590260] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=86770 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=86771 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=86772 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:50.431 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 86770 00:15:50.689 [2024-12-16 14:31:42.778826] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:50.689 [2024-12-16 14:31:42.779452] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:50.690 [2024-12-16 14:31:42.788833] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:51.631 Initializing NVMe Controllers 00:15:51.631 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:51.631 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:51.631 Initialization complete. Launching workers. 00:15:51.631 ======================================================== 00:15:51.631 Latency(us) 00:15:51.631 Device Information : IOPS MiB/s Average min max 00:15:51.631 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3640.97 14.22 274.26 178.66 486.37 00:15:51.631 ======================================================== 00:15:51.631 Total : 3640.97 14.22 274.26 178.66 486.37 00:15:51.631 00:15:51.631 Initializing NVMe Controllers 00:15:51.631 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:51.631 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:51.631 Initialization complete. Launching workers. 00:15:51.631 ======================================================== 00:15:51.631 Latency(us) 00:15:51.631 Device Information : IOPS MiB/s Average min max 00:15:51.631 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3644.00 14.23 274.06 187.74 717.34 00:15:51.631 ======================================================== 00:15:51.631 Total : 3644.00 14.23 274.06 187.74 717.34 00:15:51.631 00:15:51.631 Initializing NVMe Controllers 00:15:51.631 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:51.631 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:51.631 Initialization complete. Launching workers. 
00:15:51.631 ======================================================== 00:15:51.631 Latency(us) 00:15:51.631 Device Information : IOPS MiB/s Average min max 00:15:51.631 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3666.00 14.32 272.41 116.62 907.89 00:15:51.631 ======================================================== 00:15:51.631 Total : 3666.00 14.32 272.41 116.62 907.89 00:15:51.631 00:15:51.631 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 86771 00:15:51.631 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 86772 00:15:51.631 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:51.631 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:51.631 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:51.631 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:51.894 rmmod nvme_tcp 00:15:51.894 rmmod nvme_fabrics 00:15:51.894 rmmod nvme_keyring 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 86741 ']' 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 86741 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 86741 ']' 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 86741 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86741 00:15:51.894 killing process with pid 86741 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86741' 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 86741 00:15:51.894 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 86741 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.154 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:52.414 00:15:52.414 real 0m2.793s 00:15:52.414 user 0m4.716s 00:15:52.414 
sys 0m1.268s 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:52.414 ************************************ 00:15:52.414 END TEST nvmf_control_msg_list 00:15:52.414 ************************************ 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:52.414 ************************************ 00:15:52.414 START TEST nvmf_wait_for_buf 00:15:52.414 ************************************ 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:52.414 * Looking for test storage... 00:15:52.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:52.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.414 --rc genhtml_branch_coverage=1 00:15:52.414 --rc genhtml_function_coverage=1 00:15:52.414 --rc genhtml_legend=1 00:15:52.414 --rc geninfo_all_blocks=1 00:15:52.414 --rc geninfo_unexecuted_blocks=1 00:15:52.414 00:15:52.414 ' 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:52.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.414 --rc genhtml_branch_coverage=1 00:15:52.414 --rc genhtml_function_coverage=1 00:15:52.414 --rc genhtml_legend=1 00:15:52.414 --rc geninfo_all_blocks=1 00:15:52.414 --rc geninfo_unexecuted_blocks=1 00:15:52.414 00:15:52.414 ' 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:52.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.414 --rc genhtml_branch_coverage=1 00:15:52.414 --rc genhtml_function_coverage=1 00:15:52.414 --rc genhtml_legend=1 00:15:52.414 --rc geninfo_all_blocks=1 00:15:52.414 --rc geninfo_unexecuted_blocks=1 00:15:52.414 00:15:52.414 ' 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:52.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.414 --rc genhtml_branch_coverage=1 00:15:52.414 --rc genhtml_function_coverage=1 00:15:52.414 --rc genhtml_legend=1 00:15:52.414 --rc geninfo_all_blocks=1 00:15:52.414 --rc geninfo_unexecuted_blocks=1 00:15:52.414 00:15:52.414 ' 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:52.414 14:31:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.414 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:52.674 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
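Before this target starts, nvmftestinit rebuilds the same virtual topology that nvmf_veth_init traced for the previous test. Condensed into plain commands, and keeping the interface names and 10.0.0.x addresses from the trace, the topology amounts to roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                                # bridge joins the host-side ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP port

The second pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4) follows the same pattern, every link is brought up with ip link set ... up, and the ping checks that follow verify connectivity across the bridge before the target is launched.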
00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.674 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:52.675 Cannot find device "nvmf_init_br" 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:52.675 Cannot find device "nvmf_init_br2" 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:52.675 Cannot find device "nvmf_tgt_br" 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.675 Cannot find device "nvmf_tgt_br2" 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:52.675 Cannot find device "nvmf_init_br" 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:52.675 Cannot find device "nvmf_init_br2" 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:52.675 Cannot find device "nvmf_tgt_br" 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:52.675 Cannot find device "nvmf_tgt_br2" 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:52.675 Cannot find device "nvmf_br" 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:52.675 Cannot find device "nvmf_init_if" 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:52.675 Cannot find device "nvmf_init_if2" 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.675 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.675 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:52.675 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:52.934 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:52.934 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:52.935 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:52.935 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:52.935 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:15:52.935 00:15:52.935 --- 10.0.0.3 ping statistics --- 00:15:52.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.935 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:52.935 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:52.935 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:15:52.935 00:15:52.935 --- 10.0.0.4 ping statistics --- 00:15:52.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.935 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:52.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:52.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:52.935 00:15:52.935 --- 10.0.0.1 ping statistics --- 00:15:52.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.935 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:52.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:52.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:15:52.935 00:15:52.935 --- 10.0.0.2 ping statistics --- 00:15:52.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.935 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=87003 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 87003 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 87003 ']' 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.935 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.935 [2024-12-16 14:31:45.101674] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
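Everything traced up to this point is fixture setup: nvmf/common.sh first tears down any leftover interfaces (the "Cannot find device" / "Cannot open network namespace" lines are the expected no-op answers on a clean host), then builds a fresh veth/namespace/bridge topology, opens TCP port 4420, verifies connectivity with single pings, and only then starts the target (the "Starting SPDK ..." line above). A rough standalone sketch of that fixture, assuming the same interface names and 10.0.0.0/24 addressing shown in the log (the real nvmf_veth_init creates two initiator-side and two target-side pairs; only one of each appears here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge end
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + bridge end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge the two halves together
  ip link set nvmf_tgt_br master nvmf_br
  # open the NVMe/TCP port and intra-bridge forwarding; the SPDK_NVMF comment is
  # what lets cleanup later strip exactly these rules again
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
  ping -c 1 10.0.0.3                                          # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target -> initiator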
00:15:52.935 [2024-12-16 14:31:45.101787] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.194 [2024-12-16 14:31:45.249214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.194 [2024-12-16 14:31:45.268031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.194 [2024-12-16 14:31:45.268101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.194 [2024-12-16 14:31:45.268126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.194 [2024-12-16 14:31:45.268144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.194 [2024-12-16 14:31:45.268150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:53.194 [2024-12-16 14:31:45.268472] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.194 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.194 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:15:53.194 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:53.194 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:53.194 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.452 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.452 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:53.452 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:53.452 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.453 14:31:45 
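This is the point of the wait_for_buf case: the target was started with --wait-for-rpc so the iobuf pools can be shrunk before initialization finishes. The RPCs traced above cap the small pool at 154 buffers of 8192 bytes and then let framework_start_init complete; the transport created next is given only 24 shared buffers (-n 24 -b 24), so the short perf run further down is forced onto the buffer-retry path that iobuf_get_stats is queried for afterwards. A sketch of the same sequence driven with rpc.py directly (the log's rpc_cmd wrapper and jq filter do the equivalent):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc accel_set_options --small-cache-size 0 --large-cache-size 0
  $rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # deliberately tiny pool
  $rpc framework_start_init                                            # finish startup with the shrunk pool
  $rpc bdev_malloc_create -b Malloc0 32 512
  $rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24             # only 24 shared buffers
  $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # after the perf run, the test passes only if the small pool had to retry:
  $rpc iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'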
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.453 [2024-12-16 14:31:45.433258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.453 Malloc0 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.453 [2024-12-16 14:31:45.475250] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.453 [2024-12-16 14:31:45.503367] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.453 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:53.712 [2024-12-16 14:31:45.699633] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:55.089 Initializing NVMe Controllers 00:15:55.089 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:55.089 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:15:55.089 Initialization complete. Launching workers. 00:15:55.089 ======================================================== 00:15:55.089 Latency(us) 00:15:55.089 Device Information : IOPS MiB/s Average min max 00:15:55.089 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 500.00 62.50 8056.03 6971.32 14971.23 00:15:55.089 ======================================================== 00:15:55.089 Total : 500.00 62.50 8056.03 6971.32 14971.23 00:15:55.089 00:15:55.089 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:55.089 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:55.089 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.089 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:55.089 rmmod nvme_tcp 00:15:55.089 rmmod nvme_fabrics 00:15:55.089 rmmod nvme_keyring 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 87003 ']' 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 87003 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 87003 ']' 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 87003 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87003 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.089 killing process with pid 87003 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87003' 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 87003 00:15:55.089 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 87003 00:15:55.348 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:55.348 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:55.348 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:55.348 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:55.348 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:15:55.348 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:55.348 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:15:55.348 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:55.348 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:55.348 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:55.348 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:55.348 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:55.348 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.348 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:55.348 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:55.349 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:55.349 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:55.349 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:55.349 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:55.349 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:55.349 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.349 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.349 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:55.349 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.349 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.349 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.349 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:55.349 00:15:55.349 real 0m3.114s 00:15:55.349 user 0m2.515s 00:15:55.349 sys 0m0.752s 00:15:55.349 ************************************ 00:15:55.349 END TEST nvmf_wait_for_buf 00:15:55.349 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.349 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:55.349 ************************************ 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:55.608 ************************************ 00:15:55.608 START TEST nvmf_fuzz 00:15:55.608 ************************************ 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:55.608 * Looking for test storage... 
00:15:55.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:55.608 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:55.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.868 --rc genhtml_branch_coverage=1 00:15:55.868 --rc genhtml_function_coverage=1 00:15:55.868 --rc genhtml_legend=1 00:15:55.868 --rc geninfo_all_blocks=1 00:15:55.868 --rc geninfo_unexecuted_blocks=1 00:15:55.868 00:15:55.868 ' 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:55.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.868 --rc genhtml_branch_coverage=1 00:15:55.868 --rc genhtml_function_coverage=1 00:15:55.868 --rc genhtml_legend=1 00:15:55.868 --rc geninfo_all_blocks=1 00:15:55.868 --rc geninfo_unexecuted_blocks=1 00:15:55.868 00:15:55.868 ' 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:55.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.868 --rc genhtml_branch_coverage=1 00:15:55.868 --rc genhtml_function_coverage=1 00:15:55.868 --rc genhtml_legend=1 00:15:55.868 --rc geninfo_all_blocks=1 00:15:55.868 --rc geninfo_unexecuted_blocks=1 00:15:55.868 00:15:55.868 ' 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:55.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.868 --rc genhtml_branch_coverage=1 00:15:55.868 --rc genhtml_function_coverage=1 00:15:55.868 --rc genhtml_legend=1 00:15:55.868 --rc geninfo_all_blocks=1 00:15:55.868 --rc geninfo_unexecuted_blocks=1 00:15:55.868 00:15:55.868 ' 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
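The lcov probe a few entries above is a plain dotted-version comparison: scripts/common.sh splits the reported version and the threshold on '.'/'-' and compares them field by field, and since 1.15 < 2 the older --rc lcov_branch_coverage / lcov_function_coverage spelling is exported. A simplified sketch of that comparison (ver_lt is an illustrative name, not the actual helper):

  ver_lt() {                       # succeeds when $1 is an older version than $2
      local IFS=.- i
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                     # equal is not "less than"
  }
  ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov: use --rc lcov_*_coverage=1"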
00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.868 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:55.869 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
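nvmftestinit registers its teardown before it builds anything, so an interrupted run still removes the namespace, veth pairs and firewall rules it created. The function names below are the ones traced in this log; the bodies are a condensed sketch of that cleanup contract, not the exact common.sh code:

  nvmftestfini() {
      nvmfcleanup                                            # sync + unload nvme-tcp/nvme-fabrics/nvme-keyring
      [[ -n ${nvmfpid:-} ]] && killprocess "$nvmfpid"        # stop the target if it is still running
      iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules tagged with SPDK_NVMF comments
      nvmf_veth_fini                                         # unbridge, down, delete the veth pairs and the netns
  }
  trap nvmftestfini SIGINT SIGTERM EXIT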
00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:55.869 Cannot find device "nvmf_init_br" 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:15:55.869 14:31:47 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:55.869 Cannot find device "nvmf_init_br2" 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:55.869 Cannot find device "nvmf_tgt_br" 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.869 Cannot find device "nvmf_tgt_br2" 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:55.869 Cannot find device "nvmf_init_br" 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:55.869 Cannot find device "nvmf_init_br2" 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:55.869 Cannot find device "nvmf_tgt_br" 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:55.869 Cannot find device "nvmf_tgt_br2" 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:55.869 Cannot find device "nvmf_br" 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:55.869 Cannot find device "nvmf_init_if" 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:55.869 Cannot find device "nvmf_init_if2" 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:55.869 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:55.869 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:55.869 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:55.869 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:55.869 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:55.869 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:55.869 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:55.869 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:56.129 14:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:56.129 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:56.129 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:15:56.129 00:15:56.129 --- 10.0.0.3 ping statistics --- 00:15:56.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.129 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:56.129 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:56.129 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:15:56.129 00:15:56.129 --- 10.0.0.4 ping statistics --- 00:15:56.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.129 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:56.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:56.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:56.129 00:15:56.129 --- 10.0.0.1 ping statistics --- 00:15:56.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.129 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:56.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:56.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:15:56.129 00:15:56.129 --- 10.0.0.2 ping statistics --- 00:15:56.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.129 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=87256 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 87256 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 87256 ']' 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
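For the fuzz case the target runs inside the namespace pinned to core 0 (-m 0x1), and the fuzzer below is given core 1 (-m 0x2), presumably so the two do not contend for the same core. Once the RPC socket answers, the target is configured with a malloc namespace and a TCP listener on 10.0.0.3:4420. A condensed sketch of that bring-up, with a simple polling loop standing in for waitforlisten (the real helper is more thorough):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # the RPC socket is a UNIX socket on the shared filesystem, so rpc.py does not
  # need to run inside the namespace
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until $rpc rpc_get_methods &> /dev/null; do sleep 0.2; done   # crude waitforlisten
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create -b Malloc0 64 512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420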
00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.129 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.389 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.389 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:56.389 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:56.389 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.389 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.389 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.389 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:15:56.389 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.389 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.649 Malloc0 00:15:56.649 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.649 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:56.649 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.649 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.649 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.649 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:56.649 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.649 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.649 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.649 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:56.649 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.649 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.649 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.649 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:15:56.649 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:15:56.907 Shutting down the fuzz application 00:15:56.907 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:15:56.907 Shutting down the fuzz application 00:15:56.907 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.907 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.907 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:57.167 rmmod nvme_tcp 00:15:57.167 rmmod nvme_fabrics 00:15:57.167 rmmod nvme_keyring 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 87256 ']' 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 87256 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 87256 ']' 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 87256 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87256 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:57.167 killing process with pid 87256 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87256' 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 87256 00:15:57.167 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 87256 00:15:57.426 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:57.426 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:57.426 14:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:57.426 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:15:57.426 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:15:57.426 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:57.426 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:15:57.426 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:57.426 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:57.426 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:57.426 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:57.426 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:57.426 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:57.426 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:57.427 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:57.427 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:57.427 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:57.427 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:57.427 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:57.427 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:57.427 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.427 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.427 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:57.427 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.427 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.427 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:15:57.687 00:15:57.687 real 0m2.060s 00:15:57.687 user 0m1.724s 00:15:57.687 sys 0m0.633s 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:57.687 ************************************ 00:15:57.687 END TEST nvmf_fuzz 00:15:57.687 ************************************ 00:15:57.687 14:31:49 
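To summarize the fuzz stage that just finished: once the single cnode1 subsystem was listening on 10.0.0.3:4420, the harness ran nvme_fuzz twice against the same transport ID, first a 30-second randomized pass with a fixed seed and then a replay of the canned commands in example.json, before deleting the subsystem. Paths and arguments below are copied from the trace; the standalone rpc.py call at the end is an illustrative equivalent of the rpc_cmd teardown.

FUZZ=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420'
# Randomized 30 s pass; the fixed seed (-S 123456) keeps any crash reproducible.
$FUZZ -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a
# Second pass drives the same target with the canned command set from example.json.
$FUZZ -m 0x2 -F "$TRID" -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a
# Teardown: remove the fuzz subsystem before the next test creates its own.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1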
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:57.687 ************************************ 00:15:57.687 START TEST nvmf_multiconnection 00:15:57.687 ************************************ 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:57.687 * Looking for test storage... 00:15:57.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:57.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.687 --rc genhtml_branch_coverage=1 00:15:57.687 --rc genhtml_function_coverage=1 00:15:57.687 --rc genhtml_legend=1 00:15:57.687 --rc geninfo_all_blocks=1 00:15:57.687 --rc geninfo_unexecuted_blocks=1 00:15:57.687 00:15:57.687 ' 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:57.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.687 --rc genhtml_branch_coverage=1 00:15:57.687 --rc genhtml_function_coverage=1 00:15:57.687 --rc genhtml_legend=1 00:15:57.687 --rc geninfo_all_blocks=1 00:15:57.687 --rc geninfo_unexecuted_blocks=1 00:15:57.687 00:15:57.687 ' 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:57.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.687 --rc genhtml_branch_coverage=1 00:15:57.687 --rc genhtml_function_coverage=1 00:15:57.687 --rc genhtml_legend=1 00:15:57.687 --rc geninfo_all_blocks=1 00:15:57.687 --rc geninfo_unexecuted_blocks=1 00:15:57.687 00:15:57.687 ' 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:57.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.687 --rc genhtml_branch_coverage=1 00:15:57.687 --rc genhtml_function_coverage=1 00:15:57.687 --rc genhtml_legend=1 00:15:57.687 --rc geninfo_all_blocks=1 00:15:57.687 --rc geninfo_unexecuted_blocks=1 00:15:57.687 00:15:57.687 ' 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:15:57.687 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.688 
14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:57.688 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.688 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:57.947 14:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:57.947 Cannot find device "nvmf_init_br" 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:57.947 Cannot find device "nvmf_init_br2" 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:57.947 Cannot find device "nvmf_tgt_br" 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:57.947 Cannot find device "nvmf_tgt_br2" 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:57.947 Cannot find device "nvmf_init_br" 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:57.947 Cannot find device "nvmf_init_br2" 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:57.947 Cannot find device "nvmf_tgt_br" 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:57.947 Cannot find device "nvmf_tgt_br2" 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:57.947 Cannot find device "nvmf_br" 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:15:57.947 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:57.947 Cannot find device "nvmf_init_if" 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:15:57.947 Cannot find device "nvmf_init_if2" 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:57.947 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:58.206 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:58.206 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:15:58.206 00:15:58.206 --- 10.0.0.3 ping statistics --- 00:15:58.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.206 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:58.206 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:58.206 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:15:58.206 00:15:58.206 --- 10.0.0.4 ping statistics --- 00:15:58.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.206 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:58.206 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:58.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:58.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:15:58.206 00:15:58.206 --- 10.0.0.1 ping statistics --- 00:15:58.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.206 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:58.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:15:58.207 00:15:58.207 --- 10.0.0.2 ping statistics --- 00:15:58.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.207 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=87486 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 87486 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 87486 ']' 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
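The pings above confirm the veth/bridge topology that nvmf_veth_init rebuilt before the multiconnection target was launched. A minimal single-pair version of that setup, with interface names and addresses taken from this log (the real helper also creates the nvmf_init_if2/nvmf_tgt_if2 pair for 10.0.0.2 and 10.0.0.4):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the host-side veth peers so both ends share one L2 segment.
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Allow NVMe/TCP (port 4420) in, then verify reachability from host to the target namespace.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3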
00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.207 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.207 [2024-12-16 14:31:50.375080] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:15:58.207 [2024-12-16 14:31:50.375179] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.466 [2024-12-16 14:31:50.528134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:58.466 [2024-12-16 14:31:50.554098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.466 [2024-12-16 14:31:50.554160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.466 [2024-12-16 14:31:50.554179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.466 [2024-12-16 14:31:50.554190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.466 [2024-12-16 14:31:50.554198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.466 [2024-12-16 14:31:50.555148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.466 [2024-12-16 14:31:50.555230] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.466 [2024-12-16 14:31:50.555370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.466 [2024-12-16 14:31:50.555375] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.466 [2024-12-16 14:31:50.588838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:58.466 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.466 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:15:58.466 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:58.466 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:58.466 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 [2024-12-16 14:31:50.680646] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:15:58.726 14:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 Malloc1 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 [2024-12-16 14:31:50.757064] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 Malloc2 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 Malloc3 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.726 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 Malloc4 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.727 Malloc5 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:15:58.727 
14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.727 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.987 Malloc6 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.987 Malloc7 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.987 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.987 Malloc8 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:15:58.987 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.988 
14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.988 Malloc9 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.988 Malloc10 00:15:58.988 14:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.988 Malloc11 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.988 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.247 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.247 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:15:59.247 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.247 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.247 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.247 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:15:59.247 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.247 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.247 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.247 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:15:59.247 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.248 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:59.248 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:15:59.248 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:15:59.248 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.248 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:59.248 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:01.780 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:01.781 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:01.781 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:16:01.781 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:01.781 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.781 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:01.781 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:01.781 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:16:01.781 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:16:01.781 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:01.781 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:01.781 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:01.781 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:03.691 14:31:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:03.691 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:03.691 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:16:03.691 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:03.691 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:03.691 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:03.691 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:03.691 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:16:03.691 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:16:03.691 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:03.691 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:03.691 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:03.691 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:05.594 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:05.594 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:05.594 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:16:05.594 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:05.594 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:05.594 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:05.594 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:05.594 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:16:05.852 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:16:05.852 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:05.852 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:05.852 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:16:05.852 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:07.783 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:07.783 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:07.783 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:16:07.783 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:07.783 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:07.783 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:07.783 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:07.783 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:16:07.783 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:16:07.783 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:07.783 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:07.783 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:07.783 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:10.314 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:10.314 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:10.314 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:16:10.314 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:10.314 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.314 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:10.314 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:10.314 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:16:10.314 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:16:10.314 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:10.314 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local 
nvme_device_counter=1 nvme_devices=0 00:16:10.314 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:10.314 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:12.216 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:12.216 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:12.216 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:16:12.216 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:12.216 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:12.216 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:12.216 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:12.216 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:16:12.216 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:16:12.216 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:12.216 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:12.216 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:12.216 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:14.117 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:14.117 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:14.117 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:16:14.376 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:14.376 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:14.376 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:14.376 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:14.376 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:16:14.376 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:16:14.376 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1202 -- # local i=0 00:16:14.376 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:14.376 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:14.376 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:16.908 14:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:16.908 14:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:16.908 14:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:16:16.908 14:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:16.908 14:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:16.908 14:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:16.908 14:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:16.908 14:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:16:16.908 14:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:16:16.908 14:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:16.908 14:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:16.908 14:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:16.908 14:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:18.811 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:18.811 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:18.811 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:16:18.811 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:18.811 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:18.811 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:18.811 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:18.811 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:16:18.811 14:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:16:18.811 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:18.811 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.811 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:18.811 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:20.713 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:20.713 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:20.713 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:16:20.713 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:20.713 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:20.713 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:20.713 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:20.713 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:16:20.971 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:16:20.971 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:20.971 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:20.971 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:20.971 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:22.872 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:22.872 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:22.872 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:16:22.872 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:22.872 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.872 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:22.872 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:16:22.872 [global] 00:16:22.872 thread=1 00:16:22.872 invalidate=1 00:16:22.872 rw=read 00:16:22.872 time_based=1 
00:16:22.872 runtime=10 00:16:22.872 ioengine=libaio 00:16:22.872 direct=1 00:16:22.872 bs=262144 00:16:22.872 iodepth=64 00:16:22.872 norandommap=1 00:16:22.872 numjobs=1 00:16:22.872 00:16:22.872 [job0] 00:16:22.872 filename=/dev/nvme0n1 00:16:22.872 [job1] 00:16:22.872 filename=/dev/nvme10n1 00:16:22.872 [job2] 00:16:22.872 filename=/dev/nvme1n1 00:16:22.872 [job3] 00:16:22.872 filename=/dev/nvme2n1 00:16:22.872 [job4] 00:16:22.872 filename=/dev/nvme3n1 00:16:22.872 [job5] 00:16:22.872 filename=/dev/nvme4n1 00:16:22.872 [job6] 00:16:22.872 filename=/dev/nvme5n1 00:16:23.130 [job7] 00:16:23.130 filename=/dev/nvme6n1 00:16:23.130 [job8] 00:16:23.130 filename=/dev/nvme7n1 00:16:23.130 [job9] 00:16:23.130 filename=/dev/nvme8n1 00:16:23.130 [job10] 00:16:23.130 filename=/dev/nvme9n1 00:16:23.130 Could not set queue depth (nvme0n1) 00:16:23.130 Could not set queue depth (nvme10n1) 00:16:23.130 Could not set queue depth (nvme1n1) 00:16:23.130 Could not set queue depth (nvme2n1) 00:16:23.130 Could not set queue depth (nvme3n1) 00:16:23.130 Could not set queue depth (nvme4n1) 00:16:23.130 Could not set queue depth (nvme5n1) 00:16:23.130 Could not set queue depth (nvme6n1) 00:16:23.130 Could not set queue depth (nvme7n1) 00:16:23.130 Could not set queue depth (nvme8n1) 00:16:23.130 Could not set queue depth (nvme9n1) 00:16:23.388 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.388 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.388 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.388 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.388 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.388 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.388 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.388 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.388 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.388 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.388 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.388 fio-3.35 00:16:23.388 Starting 11 threads 00:16:35.617 00:16:35.617 job0: (groupid=0, jobs=1): err= 0: pid=87939: Mon Dec 16 14:32:25 2024 00:16:35.617 read: IOPS=136, BW=34.1MiB/s (35.7MB/s)(345MiB/10119msec) 00:16:35.617 slat (usec): min=25, max=331599, avg=7251.03, stdev=22186.11 00:16:35.617 clat (msec): min=23, max=911, avg=461.57, stdev=145.12 00:16:35.617 lat (msec): min=24, max=958, avg=468.82, stdev=146.76 00:16:35.617 clat percentiles (msec): 00:16:35.617 | 1.00th=[ 118], 5.00th=[ 288], 10.00th=[ 334], 20.00th=[ 368], 00:16:35.617 | 30.00th=[ 393], 40.00th=[ 414], 50.00th=[ 443], 60.00th=[ 460], 00:16:35.617 | 70.00th=[ 485], 80.00th=[ 527], 90.00th=[ 693], 95.00th=[ 785], 00:16:35.617 | 99.00th=[ 844], 99.50th=[ 844], 99.90th=[ 911], 99.95th=[ 911], 00:16:35.617 | 99.99th=[ 911] 00:16:35.617 bw ( KiB/s): min=12288, max=41984, 
per=5.85%, avg=33688.05, stdev=8568.37, samples=20 00:16:35.617 iops : min= 48, max= 164, avg=131.55, stdev=33.47, samples=20 00:16:35.617 lat (msec) : 50=0.07%, 250=3.99%, 500=72.15%, 750=16.03%, 1000=7.76% 00:16:35.617 cpu : usr=0.13%, sys=0.61%, ctx=274, majf=0, minf=4097 00:16:35.617 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:16:35.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.617 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.617 issued rwts: total=1379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.617 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.617 job1: (groupid=0, jobs=1): err= 0: pid=87940: Mon Dec 16 14:32:25 2024 00:16:35.617 read: IOPS=340, BW=85.2MiB/s (89.3MB/s)(859MiB/10084msec) 00:16:35.617 slat (usec): min=20, max=58617, avg=2909.04, stdev=6795.02 00:16:35.617 clat (msec): min=16, max=276, avg=184.60, stdev=23.71 00:16:35.617 lat (msec): min=17, max=276, avg=187.51, stdev=24.03 00:16:35.617 clat percentiles (msec): 00:16:35.617 | 1.00th=[ 56], 5.00th=[ 157], 10.00th=[ 167], 20.00th=[ 176], 00:16:35.617 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 190], 00:16:35.617 | 70.00th=[ 194], 80.00th=[ 199], 90.00th=[ 205], 95.00th=[ 211], 00:16:35.617 | 99.00th=[ 226], 99.50th=[ 232], 99.90th=[ 268], 99.95th=[ 275], 00:16:35.617 | 99.99th=[ 275] 00:16:35.617 bw ( KiB/s): min=80384, max=90624, per=14.98%, avg=86314.50, stdev=2519.14, samples=20 00:16:35.617 iops : min= 314, max= 354, avg=337.15, stdev= 9.84, samples=20 00:16:35.617 lat (msec) : 20=0.12%, 50=0.52%, 100=0.99%, 250=98.08%, 500=0.29% 00:16:35.617 cpu : usr=0.15%, sys=1.58%, ctx=709, majf=0, minf=4097 00:16:35.617 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:16:35.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.617 issued rwts: total=3436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.617 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.617 job2: (groupid=0, jobs=1): err= 0: pid=87941: Mon Dec 16 14:32:25 2024 00:16:35.617 read: IOPS=340, BW=85.2MiB/s (89.3MB/s)(859MiB/10079msec) 00:16:35.617 slat (usec): min=20, max=67391, avg=2909.78, stdev=6781.22 00:16:35.617 clat (msec): min=18, max=287, avg=184.69, stdev=22.09 00:16:35.617 lat (msec): min=19, max=287, avg=187.60, stdev=22.33 00:16:35.617 clat percentiles (msec): 00:16:35.617 | 1.00th=[ 88], 5.00th=[ 157], 10.00th=[ 167], 20.00th=[ 176], 00:16:35.617 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 190], 00:16:35.617 | 70.00th=[ 194], 80.00th=[ 199], 90.00th=[ 205], 95.00th=[ 211], 00:16:35.617 | 99.00th=[ 226], 99.50th=[ 236], 99.90th=[ 271], 99.95th=[ 271], 00:16:35.617 | 99.99th=[ 288] 00:16:35.617 bw ( KiB/s): min=82432, max=91648, per=14.98%, avg=86306.45, stdev=2595.50, samples=20 00:16:35.617 iops : min= 322, max= 358, avg=337.05, stdev=10.14, samples=20 00:16:35.617 lat (msec) : 20=0.03%, 50=0.58%, 100=0.67%, 250=98.40%, 500=0.32% 00:16:35.617 cpu : usr=0.20%, sys=1.54%, ctx=703, majf=0, minf=4097 00:16:35.617 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:16:35.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.617 issued rwts: total=3434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.617 
latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.617 job3: (groupid=0, jobs=1): err= 0: pid=87942: Mon Dec 16 14:32:25 2024 00:16:35.617 read: IOPS=358, BW=89.7MiB/s (94.1MB/s)(904MiB/10071msec) 00:16:35.617 slat (usec): min=20, max=154770, avg=2762.24, stdev=6831.34 00:16:35.617 clat (msec): min=17, max=434, avg=175.34, stdev=33.05 00:16:35.617 lat (msec): min=18, max=448, avg=178.11, stdev=33.45 00:16:35.618 clat percentiles (msec): 00:16:35.618 | 1.00th=[ 107], 5.00th=[ 148], 10.00th=[ 157], 20.00th=[ 163], 00:16:35.618 | 30.00th=[ 167], 40.00th=[ 171], 50.00th=[ 174], 60.00th=[ 176], 00:16:35.618 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 201], 00:16:35.618 | 99.00th=[ 393], 99.50th=[ 409], 99.90th=[ 426], 99.95th=[ 435], 00:16:35.618 | 99.99th=[ 435] 00:16:35.618 bw ( KiB/s): min=43520, max=97792, per=15.78%, avg=90896.20, stdev=11300.12, samples=20 00:16:35.618 iops : min= 170, max= 382, avg=355.05, stdev=44.14, samples=20 00:16:35.618 lat (msec) : 20=0.11%, 100=0.58%, 250=97.48%, 500=1.83% 00:16:35.618 cpu : usr=0.14%, sys=1.69%, ctx=762, majf=0, minf=4098 00:16:35.618 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:16:35.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.618 issued rwts: total=3614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.618 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.618 job4: (groupid=0, jobs=1): err= 0: pid=87943: Mon Dec 16 14:32:25 2024 00:16:35.618 read: IOPS=93, BW=23.4MiB/s (24.5MB/s)(237MiB/10146msec) 00:16:35.618 slat (usec): min=20, max=415336, avg=10024.54, stdev=29371.97 00:16:35.618 clat (msec): min=54, max=912, avg=674.08, stdev=147.54 00:16:35.618 lat (msec): min=55, max=957, avg=684.10, stdev=148.57 00:16:35.618 clat percentiles (msec): 00:16:35.618 | 1.00th=[ 57], 5.00th=[ 372], 10.00th=[ 527], 20.00th=[ 617], 00:16:35.618 | 30.00th=[ 642], 40.00th=[ 667], 50.00th=[ 693], 60.00th=[ 726], 00:16:35.618 | 70.00th=[ 760], 80.00th=[ 785], 90.00th=[ 810], 95.00th=[ 844], 00:16:35.618 | 99.00th=[ 869], 99.50th=[ 885], 99.90th=[ 911], 99.95th=[ 911], 00:16:35.618 | 99.99th=[ 911] 00:16:35.618 bw ( KiB/s): min=12312, max=30720, per=3.93%, avg=22626.50, stdev=5113.20, samples=20 00:16:35.618 iops : min= 48, max= 120, avg=88.30, stdev=19.94, samples=20 00:16:35.618 lat (msec) : 100=2.22%, 250=1.48%, 500=4.11%, 750=60.97%, 1000=31.22% 00:16:35.618 cpu : usr=0.04%, sys=0.49%, ctx=181, majf=0, minf=4097 00:16:35.618 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.4% 00:16:35.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.618 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.618 issued rwts: total=948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.618 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.618 job5: (groupid=0, jobs=1): err= 0: pid=87944: Mon Dec 16 14:32:25 2024 00:16:35.618 read: IOPS=134, BW=33.7MiB/s (35.3MB/s)(340MiB/10098msec) 00:16:35.618 slat (usec): min=21, max=291738, avg=7342.32, stdev=21698.43 00:16:35.618 clat (msec): min=97, max=967, avg=467.11, stdev=139.58 00:16:35.618 lat (msec): min=165, max=967, avg=474.45, stdev=141.31 00:16:35.618 clat percentiles (msec): 00:16:35.618 | 1.00th=[ 209], 5.00th=[ 313], 10.00th=[ 338], 20.00th=[ 372], 00:16:35.618 | 30.00th=[ 401], 40.00th=[ 418], 50.00th=[ 435], 60.00th=[ 451], 
00:16:35.618 | 70.00th=[ 472], 80.00th=[ 550], 90.00th=[ 701], 95.00th=[ 776], 00:16:35.618 | 99.00th=[ 894], 99.50th=[ 911], 99.90th=[ 911], 99.95th=[ 969], 00:16:35.618 | 99.99th=[ 969] 00:16:35.618 bw ( KiB/s): min=12288, max=42496, per=5.77%, avg=33225.05, stdev=8639.12, samples=20 00:16:35.618 iops : min= 48, max= 166, avg=129.75, stdev=33.73, samples=20 00:16:35.618 lat (msec) : 100=0.07%, 250=2.20%, 500=74.72%, 750=16.46%, 1000=6.54% 00:16:35.618 cpu : usr=0.06%, sys=0.61%, ctx=270, majf=0, minf=4097 00:16:35.618 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.4% 00:16:35.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.618 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.618 issued rwts: total=1361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.618 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.618 job6: (groupid=0, jobs=1): err= 0: pid=87945: Mon Dec 16 14:32:25 2024 00:16:35.618 read: IOPS=120, BW=30.0MiB/s (31.5MB/s)(305MiB/10146msec) 00:16:35.618 slat (usec): min=21, max=183089, avg=8233.44, stdev=21576.75 00:16:35.618 clat (msec): min=19, max=838, avg=524.14, stdev=206.45 00:16:35.618 lat (msec): min=20, max=885, avg=532.37, stdev=209.50 00:16:35.618 clat percentiles (msec): 00:16:35.618 | 1.00th=[ 30], 5.00th=[ 138], 10.00th=[ 207], 20.00th=[ 330], 00:16:35.618 | 30.00th=[ 409], 40.00th=[ 481], 50.00th=[ 550], 60.00th=[ 651], 00:16:35.618 | 70.00th=[ 693], 80.00th=[ 718], 90.00th=[ 743], 95.00th=[ 768], 00:16:35.618 | 99.00th=[ 810], 99.50th=[ 810], 99.90th=[ 835], 99.95th=[ 835], 00:16:35.618 | 99.99th=[ 835] 00:16:35.618 bw ( KiB/s): min=18906, max=74752, per=5.13%, avg=29543.00, stdev=13004.56, samples=20 00:16:35.618 iops : min= 73, max= 292, avg=115.35, stdev=50.84, samples=20 00:16:35.618 lat (msec) : 20=0.08%, 50=2.05%, 100=1.56%, 250=8.78%, 500=31.20% 00:16:35.618 lat (msec) : 750=47.29%, 1000=9.03% 00:16:35.618 cpu : usr=0.05%, sys=0.60%, ctx=243, majf=0, minf=4097 00:16:35.618 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8% 00:16:35.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.618 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.618 issued rwts: total=1218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.618 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.618 job7: (groupid=0, jobs=1): err= 0: pid=87946: Mon Dec 16 14:32:25 2024 00:16:35.618 read: IOPS=141, BW=35.5MiB/s (37.2MB/s)(359MiB/10123msec) 00:16:35.618 slat (usec): min=21, max=287522, avg=6995.81, stdev=19254.90 00:16:35.618 clat (msec): min=18, max=890, avg=443.14, stdev=133.69 00:16:35.618 lat (msec): min=19, max=898, avg=450.14, stdev=135.31 00:16:35.618 clat percentiles (msec): 00:16:35.618 | 1.00th=[ 62], 5.00th=[ 317], 10.00th=[ 342], 20.00th=[ 368], 00:16:35.618 | 30.00th=[ 388], 40.00th=[ 401], 50.00th=[ 414], 60.00th=[ 422], 00:16:35.618 | 70.00th=[ 435], 80.00th=[ 489], 90.00th=[ 667], 95.00th=[ 726], 00:16:35.618 | 99.00th=[ 827], 99.50th=[ 844], 99.90th=[ 894], 99.95th=[ 894], 00:16:35.618 | 99.99th=[ 894] 00:16:35.618 bw ( KiB/s): min=10240, max=44544, per=6.10%, avg=35146.50, stdev=8577.70, samples=20 00:16:35.618 iops : min= 40, max= 174, avg=137.25, stdev=33.57, samples=20 00:16:35.618 lat (msec) : 20=0.21%, 50=0.77%, 100=0.07%, 250=2.23%, 500=77.31% 00:16:35.618 lat (msec) : 750=15.80%, 1000=3.62% 00:16:35.618 cpu : usr=0.08%, sys=0.68%, ctx=291, majf=0, minf=4097 
00:16:35.618 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:16:35.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.618 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.618 issued rwts: total=1437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.618 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.618 job8: (groupid=0, jobs=1): err= 0: pid=87947: Mon Dec 16 14:32:25 2024 00:16:35.618 read: IOPS=122, BW=30.5MiB/s (32.0MB/s)(310MiB/10148msec) 00:16:35.618 slat (usec): min=20, max=158152, avg=8080.39, stdev=21283.03 00:16:35.618 clat (msec): min=17, max=922, avg=515.25, stdev=223.87 00:16:35.618 lat (msec): min=17, max=922, avg=523.33, stdev=227.04 00:16:35.618 clat percentiles (msec): 00:16:35.618 | 1.00th=[ 61], 5.00th=[ 125], 10.00th=[ 188], 20.00th=[ 321], 00:16:35.618 | 30.00th=[ 376], 40.00th=[ 430], 50.00th=[ 550], 60.00th=[ 634], 00:16:35.618 | 70.00th=[ 684], 80.00th=[ 743], 90.00th=[ 776], 95.00th=[ 810], 00:16:35.618 | 99.00th=[ 877], 99.50th=[ 919], 99.90th=[ 919], 99.95th=[ 919], 00:16:35.618 | 99.99th=[ 919] 00:16:35.618 bw ( KiB/s): min=18944, max=86188, per=5.22%, avg=30084.85, stdev=15668.20, samples=20 00:16:35.618 iops : min= 74, max= 336, avg=117.45, stdev=61.06, samples=20 00:16:35.618 lat (msec) : 20=0.32%, 50=0.16%, 100=2.26%, 250=13.08%, 500=32.53% 00:16:35.618 lat (msec) : 750=34.79%, 1000=16.87% 00:16:35.618 cpu : usr=0.07%, sys=0.58%, ctx=250, majf=0, minf=4097 00:16:35.618 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=94.9% 00:16:35.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.618 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.618 issued rwts: total=1239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.618 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.618 job9: (groupid=0, jobs=1): err= 0: pid=87948: Mon Dec 16 14:32:25 2024 00:16:35.618 read: IOPS=117, BW=29.3MiB/s (30.8MB/s)(298MiB/10150msec) 00:16:35.618 slat (usec): min=20, max=142391, avg=8160.39, stdev=21267.62 00:16:35.618 clat (msec): min=23, max=814, avg=536.23, stdev=185.19 00:16:35.618 lat (msec): min=25, max=887, avg=544.39, stdev=188.30 00:16:35.618 clat percentiles (msec): 00:16:35.618 | 1.00th=[ 99], 5.00th=[ 224], 10.00th=[ 288], 20.00th=[ 355], 00:16:35.618 | 30.00th=[ 414], 40.00th=[ 460], 50.00th=[ 609], 60.00th=[ 659], 00:16:35.618 | 70.00th=[ 693], 80.00th=[ 718], 90.00th=[ 743], 95.00th=[ 768], 00:16:35.618 | 99.00th=[ 802], 99.50th=[ 810], 99.90th=[ 810], 99.95th=[ 818], 00:16:35.618 | 99.99th=[ 818] 00:16:35.618 bw ( KiB/s): min=19456, max=48128, per=5.01%, avg=28848.50, stdev=9412.62, samples=20 00:16:35.618 iops : min= 76, max= 188, avg=112.65, stdev=36.78, samples=20 00:16:35.618 lat (msec) : 50=0.59%, 100=0.67%, 250=4.79%, 500=40.22%, 750=46.26% 00:16:35.618 lat (msec) : 1000=7.47% 00:16:35.618 cpu : usr=0.08%, sys=0.58%, ctx=245, majf=0, minf=4097 00:16:35.618 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7% 00:16:35.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.618 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.618 issued rwts: total=1191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.618 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.618 job10: (groupid=0, jobs=1): err= 0: pid=87949: Mon Dec 16 14:32:25 2024 00:16:35.618 read: IOPS=356, 
BW=89.1MiB/s (93.4MB/s)(897MiB/10066msec) 00:16:35.618 slat (usec): min=21, max=255849, avg=2783.80, stdev=7653.59 00:16:35.618 clat (msec): min=40, max=432, avg=176.54, stdev=37.66 00:16:35.618 lat (msec): min=40, max=484, avg=179.32, stdev=37.98 00:16:35.618 clat percentiles (msec): 00:16:35.618 | 1.00th=[ 96], 5.00th=[ 146], 10.00th=[ 157], 20.00th=[ 163], 00:16:35.618 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:16:35.618 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 203], 00:16:35.618 | 99.00th=[ 397], 99.50th=[ 418], 99.90th=[ 426], 99.95th=[ 435], 00:16:35.618 | 99.99th=[ 435] 00:16:35.618 bw ( KiB/s): min=43433, max=96768, per=15.66%, avg=90217.05, stdev=11860.27, samples=20 00:16:35.618 iops : min= 169, max= 378, avg=352.35, stdev=46.46, samples=20 00:16:35.618 lat (msec) : 50=0.47%, 100=0.59%, 250=95.46%, 500=3.48% 00:16:35.618 cpu : usr=0.19%, sys=1.64%, ctx=736, majf=0, minf=4097 00:16:35.618 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:16:35.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.619 issued rwts: total=3588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.619 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.619 00:16:35.619 Run status group 0 (all jobs): 00:16:35.619 READ: bw=563MiB/s (590MB/s), 23.4MiB/s-89.7MiB/s (24.5MB/s-94.1MB/s), io=5711MiB (5989MB), run=10066-10150msec 00:16:35.619 00:16:35.619 Disk stats (read/write): 00:16:35.619 nvme0n1: ios=2634/0, merge=0/0, ticks=1219744/0, in_queue=1219744, util=97.79% 00:16:35.619 nvme10n1: ios=6749/0, merge=0/0, ticks=1232186/0, in_queue=1232186, util=98.01% 00:16:35.619 nvme1n1: ios=6744/0, merge=0/0, ticks=1232138/0, in_queue=1232138, util=98.15% 00:16:35.619 nvme2n1: ios=7117/0, merge=0/0, ticks=1235199/0, in_queue=1235199, util=98.31% 00:16:35.619 nvme3n1: ios=1768/0, merge=0/0, ticks=1202096/0, in_queue=1202096, util=98.24% 00:16:35.619 nvme4n1: ios=2597/0, merge=0/0, ticks=1213823/0, in_queue=1213823, util=98.35% 00:16:35.619 nvme5n1: ios=2312/0, merge=0/0, ticks=1200345/0, in_queue=1200345, util=98.61% 00:16:35.619 nvme6n1: ios=2750/0, merge=0/0, ticks=1218582/0, in_queue=1218582, util=98.75% 00:16:35.619 nvme7n1: ios=2350/0, merge=0/0, ticks=1192197/0, in_queue=1192197, util=99.00% 00:16:35.619 nvme8n1: ios=2257/0, merge=0/0, ticks=1205057/0, in_queue=1205057, util=99.07% 00:16:35.619 nvme9n1: ios=7053/0, merge=0/0, ticks=1233262/0, in_queue=1233262, util=99.08% 00:16:35.619 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:16:35.619 [global] 00:16:35.619 thread=1 00:16:35.619 invalidate=1 00:16:35.619 rw=randwrite 00:16:35.619 time_based=1 00:16:35.619 runtime=10 00:16:35.619 ioengine=libaio 00:16:35.619 direct=1 00:16:35.619 bs=262144 00:16:35.619 iodepth=64 00:16:35.619 norandommap=1 00:16:35.619 numjobs=1 00:16:35.619 00:16:35.619 [job0] 00:16:35.619 filename=/dev/nvme0n1 00:16:35.619 [job1] 00:16:35.619 filename=/dev/nvme10n1 00:16:35.619 [job2] 00:16:35.619 filename=/dev/nvme1n1 00:16:35.619 [job3] 00:16:35.619 filename=/dev/nvme2n1 00:16:35.619 [job4] 00:16:35.619 filename=/dev/nvme3n1 00:16:35.619 [job5] 00:16:35.619 filename=/dev/nvme4n1 00:16:35.619 [job6] 00:16:35.619 filename=/dev/nvme5n1 00:16:35.619 [job7] 00:16:35.619 filename=/dev/nvme6n1 00:16:35.619 
[job8] 00:16:35.619 filename=/dev/nvme7n1 00:16:35.619 [job9] 00:16:35.619 filename=/dev/nvme8n1 00:16:35.619 [job10] 00:16:35.619 filename=/dev/nvme9n1 00:16:35.619 Could not set queue depth (nvme0n1) 00:16:35.619 Could not set queue depth (nvme10n1) 00:16:35.619 Could not set queue depth (nvme1n1) 00:16:35.619 Could not set queue depth (nvme2n1) 00:16:35.619 Could not set queue depth (nvme3n1) 00:16:35.619 Could not set queue depth (nvme4n1) 00:16:35.619 Could not set queue depth (nvme5n1) 00:16:35.619 Could not set queue depth (nvme6n1) 00:16:35.619 Could not set queue depth (nvme7n1) 00:16:35.619 Could not set queue depth (nvme8n1) 00:16:35.619 Could not set queue depth (nvme9n1) 00:16:35.619 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.619 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.619 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.619 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.619 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.619 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.619 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.619 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.619 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.619 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.619 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.619 fio-3.35 00:16:35.619 Starting 11 threads 00:16:45.597 00:16:45.597 job0: (groupid=0, jobs=1): err= 0: pid=88144: Mon Dec 16 14:32:36 2024 00:16:45.597 write: IOPS=882, BW=221MiB/s (231MB/s)(2232MiB/10113msec); 0 zone resets 00:16:45.597 slat (usec): min=18, max=12483, avg=1115.15, stdev=2000.57 00:16:45.597 clat (msec): min=6, max=247, avg=71.33, stdev=23.77 00:16:45.597 lat (msec): min=6, max=247, avg=72.45, stdev=24.05 00:16:45.597 clat percentiles (msec): 00:16:45.597 | 1.00th=[ 58], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 62], 00:16:45.597 | 30.00th=[ 63], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 65], 00:16:45.597 | 70.00th=[ 66], 80.00th=[ 67], 90.00th=[ 88], 95.00th=[ 144], 00:16:45.597 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 230], 99.95th=[ 239], 00:16:45.597 | 99.99th=[ 247] 00:16:45.597 bw ( KiB/s): min=110592, max=259584, per=23.90%, avg=226918.40, stdev=53407.53, samples=20 00:16:45.597 iops : min= 432, max= 1014, avg=886.40, stdev=208.62, samples=20 00:16:45.597 lat (msec) : 10=0.07%, 20=0.02%, 50=0.29%, 100=90.64%, 250=8.98% 00:16:45.597 cpu : usr=1.59%, sys=2.39%, ctx=10875, majf=0, minf=1 00:16:45.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:45.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.597 issued rwts: total=0,8927,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:16:45.597 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.597 job1: (groupid=0, jobs=1): err= 0: pid=88145: Mon Dec 16 14:32:36 2024 00:16:45.597 write: IOPS=123, BW=30.8MiB/s (32.3MB/s)(316MiB/10252msec); 0 zone resets 00:16:45.597 slat (usec): min=17, max=340628, avg=7922.14, stdev=17328.20 00:16:45.597 clat (msec): min=229, max=749, avg=510.88, stdev=65.59 00:16:45.597 lat (msec): min=259, max=749, avg=518.81, stdev=64.60 00:16:45.597 clat percentiles (msec): 00:16:45.597 | 1.00th=[ 321], 5.00th=[ 456], 10.00th=[ 464], 20.00th=[ 472], 00:16:45.597 | 30.00th=[ 489], 40.00th=[ 493], 50.00th=[ 498], 60.00th=[ 502], 00:16:45.597 | 70.00th=[ 506], 80.00th=[ 531], 90.00th=[ 609], 95.00th=[ 667], 00:16:45.597 | 99.00th=[ 701], 99.50th=[ 718], 99.90th=[ 751], 99.95th=[ 751], 00:16:45.597 | 99.99th=[ 751] 00:16:45.597 bw ( KiB/s): min=14336, max=34816, per=3.24%, avg=30720.00, stdev=4791.48, samples=20 00:16:45.597 iops : min= 56, max= 136, avg=120.00, stdev=18.72, samples=20 00:16:45.597 lat (msec) : 250=0.08%, 500=59.57%, 750=40.35% 00:16:45.597 cpu : usr=0.24%, sys=0.37%, ctx=684, majf=0, minf=1 00:16:45.597 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:16:45.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.597 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.597 issued rwts: total=0,1264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.597 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.597 job2: (groupid=0, jobs=1): err= 0: pid=88157: Mon Dec 16 14:32:36 2024 00:16:45.597 write: IOPS=126, BW=31.7MiB/s (33.3MB/s)(326MiB/10271msec); 0 zone resets 00:16:45.597 slat (usec): min=15, max=124182, avg=7685.28, stdev=14154.90 00:16:45.597 clat (msec): min=21, max=729, avg=496.17, stdev=79.29 00:16:45.597 lat (msec): min=21, max=729, avg=503.86, stdev=79.44 00:16:45.597 clat percentiles (msec): 00:16:45.597 | 1.00th=[ 146], 5.00th=[ 414], 10.00th=[ 460], 20.00th=[ 468], 00:16:45.597 | 30.00th=[ 485], 40.00th=[ 493], 50.00th=[ 498], 60.00th=[ 498], 00:16:45.597 | 70.00th=[ 506], 80.00th=[ 514], 90.00th=[ 592], 95.00th=[ 625], 00:16:45.597 | 99.00th=[ 676], 99.50th=[ 676], 99.90th=[ 726], 99.95th=[ 726], 00:16:45.597 | 99.99th=[ 726] 00:16:45.597 bw ( KiB/s): min=22528, max=34816, per=3.34%, avg=31744.00, stdev=3151.81, samples=20 00:16:45.597 iops : min= 88, max= 136, avg=124.00, stdev=12.31, samples=20 00:16:45.597 lat (msec) : 50=0.31%, 100=0.31%, 250=1.30%, 500=58.97%, 750=39.11% 00:16:45.597 cpu : usr=0.27%, sys=0.37%, ctx=1316, majf=0, minf=1 00:16:45.597 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.2% 00:16:45.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.597 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.597 issued rwts: total=0,1304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.597 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.597 job3: (groupid=0, jobs=1): err= 0: pid=88158: Mon Dec 16 14:32:36 2024 00:16:45.597 write: IOPS=1036, BW=259MiB/s (272MB/s)(2605MiB/10052msec); 0 zone resets 00:16:45.597 slat (usec): min=16, max=6613, avg=954.98, stdev=1608.69 00:16:45.597 clat (msec): min=5, max=108, avg=60.76, stdev= 5.61 00:16:45.597 lat (msec): min=5, max=108, avg=61.71, stdev= 5.50 00:16:45.597 clat percentiles (msec): 00:16:45.597 | 1.00th=[ 56], 5.00th=[ 57], 10.00th=[ 57], 20.00th=[ 58], 00:16:45.597 | 30.00th=[ 59], 40.00th=[ 
61], 50.00th=[ 61], 60.00th=[ 62], 00:16:45.597 | 70.00th=[ 62], 80.00th=[ 62], 90.00th=[ 63], 95.00th=[ 66], 00:16:45.597 | 99.00th=[ 87], 99.50th=[ 88], 99.90th=[ 97], 99.95th=[ 105], 00:16:45.597 | 99.99th=[ 108] 00:16:45.597 bw ( KiB/s): min=198770, max=275456, per=27.93%, avg=265119.30, stdev=16130.86, samples=20 00:16:45.597 iops : min= 776, max= 1076, avg=1035.60, stdev=63.11, samples=20 00:16:45.597 lat (msec) : 10=0.08%, 20=0.12%, 50=0.20%, 100=99.51%, 250=0.10% 00:16:45.597 cpu : usr=1.29%, sys=2.64%, ctx=11775, majf=0, minf=2 00:16:45.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:45.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.597 issued rwts: total=0,10420,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.597 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.597 job4: (groupid=0, jobs=1): err= 0: pid=88159: Mon Dec 16 14:32:36 2024 00:16:45.597 write: IOPS=125, BW=31.5MiB/s (33.0MB/s)(323MiB/10251msec); 0 zone resets 00:16:45.597 slat (usec): min=17, max=240257, avg=7335.51, stdev=15460.97 00:16:45.597 clat (msec): min=14, max=708, avg=500.98, stdev=83.58 00:16:45.597 lat (msec): min=14, max=708, avg=508.32, stdev=83.43 00:16:45.598 clat percentiles (msec): 00:16:45.598 | 1.00th=[ 20], 5.00th=[ 405], 10.00th=[ 464], 20.00th=[ 472], 00:16:45.598 | 30.00th=[ 489], 40.00th=[ 493], 50.00th=[ 498], 60.00th=[ 502], 00:16:45.598 | 70.00th=[ 510], 80.00th=[ 531], 90.00th=[ 584], 95.00th=[ 642], 00:16:45.598 | 99.00th=[ 693], 99.50th=[ 701], 99.90th=[ 709], 99.95th=[ 709], 00:16:45.598 | 99.99th=[ 709] 00:16:45.598 bw ( KiB/s): min=20521, max=37888, per=3.31%, avg=31387.65, stdev=4015.71, samples=20 00:16:45.598 iops : min= 80, max= 148, avg=122.60, stdev=15.71, samples=20 00:16:45.598 lat (msec) : 20=1.16%, 50=0.08%, 250=0.39%, 500=55.04%, 750=43.33% 00:16:45.598 cpu : usr=0.21%, sys=0.41%, ctx=658, majf=0, minf=1 00:16:45.598 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:16:45.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.598 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.598 issued rwts: total=0,1290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.598 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.598 job5: (groupid=0, jobs=1): err= 0: pid=88160: Mon Dec 16 14:32:36 2024 00:16:45.598 write: IOPS=130, BW=32.7MiB/s (34.3MB/s)(335MiB/10250msec); 0 zone resets 00:16:45.598 slat (usec): min=18, max=135040, avg=6701.52, stdev=13182.57 00:16:45.598 clat (msec): min=62, max=733, avg=482.60, stdev=76.22 00:16:45.598 lat (msec): min=62, max=733, avg=489.30, stdev=76.92 00:16:45.598 clat percentiles (msec): 00:16:45.598 | 1.00th=[ 209], 5.00th=[ 305], 10.00th=[ 409], 20.00th=[ 464], 00:16:45.598 | 30.00th=[ 472], 40.00th=[ 489], 50.00th=[ 493], 60.00th=[ 498], 00:16:45.598 | 70.00th=[ 502], 80.00th=[ 506], 90.00th=[ 558], 95.00th=[ 575], 00:16:45.598 | 99.00th=[ 676], 99.50th=[ 701], 99.90th=[ 735], 99.95th=[ 735], 00:16:45.598 | 99.99th=[ 735] 00:16:45.598 bw ( KiB/s): min=27648, max=43008, per=3.44%, avg=32691.20, stdev=2916.60, samples=20 00:16:45.598 iops : min= 108, max= 168, avg=127.70, stdev=11.39, samples=20 00:16:45.598 lat (msec) : 100=0.22%, 250=1.27%, 500=67.76%, 750=30.75% 00:16:45.598 cpu : usr=0.18%, sys=0.40%, ctx=1810, majf=0, minf=1 00:16:45.598 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 
8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:16:45.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.598 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.598 issued rwts: total=0,1340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.598 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.598 job6: (groupid=0, jobs=1): err= 0: pid=88161: Mon Dec 16 14:32:36 2024 00:16:45.598 write: IOPS=120, BW=30.2MiB/s (31.7MB/s)(310MiB/10250msec); 0 zone resets 00:16:45.598 slat (usec): min=17, max=406024, avg=8072.08, stdev=18705.86 00:16:45.598 clat (msec): min=232, max=819, avg=520.68, stdev=69.03 00:16:45.598 lat (msec): min=263, max=819, avg=528.75, stdev=67.92 00:16:45.598 clat percentiles (msec): 00:16:45.598 | 1.00th=[ 326], 5.00th=[ 460], 10.00th=[ 468], 20.00th=[ 489], 00:16:45.598 | 30.00th=[ 493], 40.00th=[ 498], 50.00th=[ 502], 60.00th=[ 506], 00:16:45.598 | 70.00th=[ 527], 80.00th=[ 542], 90.00th=[ 634], 95.00th=[ 676], 00:16:45.598 | 99.00th=[ 726], 99.50th=[ 793], 99.90th=[ 818], 99.95th=[ 818], 00:16:45.598 | 99.99th=[ 818] 00:16:45.598 bw ( KiB/s): min=10260, max=34816, per=3.17%, avg=30106.60, stdev=5559.47, samples=20 00:16:45.598 iops : min= 40, max= 136, avg=117.60, stdev=21.73, samples=20 00:16:45.598 lat (msec) : 250=0.08%, 500=47.98%, 750=51.05%, 1000=0.89% 00:16:45.598 cpu : usr=0.22%, sys=0.36%, ctx=1680, majf=0, minf=1 00:16:45.598 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=94.9% 00:16:45.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.598 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.598 issued rwts: total=0,1240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.598 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.598 job7: (groupid=0, jobs=1): err= 0: pid=88162: Mon Dec 16 14:32:36 2024 00:16:45.598 write: IOPS=126, BW=31.6MiB/s (33.2MB/s)(325MiB/10259msec); 0 zone resets 00:16:45.598 slat (usec): min=17, max=149045, avg=7665.30, stdev=14292.54 00:16:45.598 clat (msec): min=61, max=729, avg=497.86, stdev=75.24 00:16:45.598 lat (msec): min=61, max=729, avg=505.52, stdev=75.24 00:16:45.598 clat percentiles (msec): 00:16:45.598 | 1.00th=[ 155], 5.00th=[ 422], 10.00th=[ 460], 20.00th=[ 468], 00:16:45.598 | 30.00th=[ 489], 40.00th=[ 493], 50.00th=[ 498], 60.00th=[ 498], 00:16:45.598 | 70.00th=[ 502], 80.00th=[ 531], 90.00th=[ 584], 95.00th=[ 609], 00:16:45.598 | 99.00th=[ 693], 99.50th=[ 701], 99.90th=[ 726], 99.95th=[ 726], 00:16:45.598 | 99.99th=[ 726] 00:16:45.598 bw ( KiB/s): min=24576, max=34816, per=3.33%, avg=31590.40, stdev=2642.74, samples=20 00:16:45.598 iops : min= 96, max= 136, avg=123.40, stdev=10.32, samples=20 00:16:45.598 lat (msec) : 100=0.31%, 250=1.46%, 500=61.86%, 750=36.36% 00:16:45.598 cpu : usr=0.29%, sys=0.44%, ctx=1846, majf=0, minf=1 00:16:45.598 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:16:45.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.598 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.598 issued rwts: total=0,1298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.598 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.598 job8: (groupid=0, jobs=1): err= 0: pid=88167: Mon Dec 16 14:32:36 2024 00:16:45.598 write: IOPS=189, BW=47.3MiB/s (49.6MB/s)(478MiB/10113msec); 0 zone resets 00:16:45.598 slat (usec): min=17, max=173616, avg=4947.03, stdev=11748.57 
00:16:45.598 clat (msec): min=24, max=732, avg=333.28, stdev=196.71 00:16:45.598 lat (msec): min=26, max=732, avg=338.22, stdev=199.71 00:16:45.598 clat percentiles (msec): 00:16:45.598 | 1.00th=[ 34], 5.00th=[ 57], 10.00th=[ 95], 20.00th=[ 140], 00:16:45.598 | 30.00th=[ 146], 40.00th=[ 150], 50.00th=[ 464], 60.00th=[ 485], 00:16:45.598 | 70.00th=[ 498], 80.00th=[ 502], 90.00th=[ 531], 95.00th=[ 584], 00:16:45.598 | 99.00th=[ 701], 99.50th=[ 735], 99.90th=[ 735], 99.95th=[ 735], 00:16:45.598 | 99.99th=[ 735] 00:16:45.598 bw ( KiB/s): min=22528, max=123392, per=4.99%, avg=47360.00, stdev=33235.61, samples=20 00:16:45.598 iops : min= 88, max= 482, avg=185.00, stdev=129.83, samples=20 00:16:45.598 lat (msec) : 50=3.66%, 100=7.00%, 250=34.92%, 500=33.87%, 750=20.54% 00:16:45.598 cpu : usr=0.34%, sys=0.48%, ctx=1829, majf=0, minf=1 00:16:45.598 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:16:45.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.598 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.598 issued rwts: total=0,1913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.598 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.598 job9: (groupid=0, jobs=1): err= 0: pid=88168: Mon Dec 16 14:32:36 2024 00:16:45.598 write: IOPS=327, BW=81.9MiB/s (85.8MB/s)(840MiB/10259msec); 0 zone resets 00:16:45.598 slat (usec): min=19, max=143843, avg=2774.11, stdev=6652.07 00:16:45.598 clat (msec): min=40, max=742, avg=192.59, stdev=122.69 00:16:45.598 lat (msec): min=41, max=742, avg=195.36, stdev=124.11 00:16:45.598 clat percentiles (msec): 00:16:45.598 | 1.00th=[ 58], 5.00th=[ 114], 10.00th=[ 144], 20.00th=[ 146], 00:16:45.598 | 30.00th=[ 150], 40.00th=[ 155], 50.00th=[ 155], 60.00th=[ 157], 00:16:45.598 | 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 460], 95.00th=[ 506], 00:16:45.598 | 99.00th=[ 634], 99.50th=[ 667], 99.90th=[ 709], 99.95th=[ 743], 00:16:45.598 | 99.99th=[ 743] 00:16:45.598 bw ( KiB/s): min=30147, max=114176, per=8.89%, avg=84374.55, stdev=32860.59, samples=20 00:16:45.598 iops : min= 117, max= 446, avg=329.55, stdev=128.43, samples=20 00:16:45.598 lat (msec) : 50=0.42%, 100=3.69%, 250=83.06%, 500=7.38%, 750=5.45% 00:16:45.598 cpu : usr=0.60%, sys=1.11%, ctx=1458, majf=0, minf=1 00:16:45.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:16:45.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.598 issued rwts: total=0,3359,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.598 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.598 job10: (groupid=0, jobs=1): err= 0: pid=88169: Mon Dec 16 14:32:36 2024 00:16:45.598 write: IOPS=570, BW=143MiB/s (149MB/s)(1434MiB/10057msec); 0 zone resets 00:16:45.598 slat (usec): min=16, max=11389, avg=1720.06, stdev=3252.44 00:16:45.598 clat (msec): min=5, max=163, avg=110.50, stdev=41.89 00:16:45.598 lat (msec): min=5, max=163, avg=112.22, stdev=42.46 00:16:45.598 clat percentiles (msec): 00:16:45.598 | 1.00th=[ 62], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 68], 00:16:45.598 | 30.00th=[ 69], 40.00th=[ 70], 50.00th=[ 113], 60.00th=[ 146], 00:16:45.598 | 70.00th=[ 153], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 159], 00:16:45.598 | 99.00th=[ 159], 99.50th=[ 161], 99.90th=[ 163], 99.95th=[ 163], 00:16:45.598 | 99.99th=[ 163] 00:16:45.598 bw ( KiB/s): min=104448, max=243200, per=15.29%, avg=145177.60, 
stdev=57590.44, samples=20 00:16:45.598 iops : min= 408, max= 950, avg=567.10, stdev=224.96, samples=20 00:16:45.598 lat (msec) : 10=0.07%, 20=0.21%, 50=0.38%, 100=48.17%, 250=51.17% 00:16:45.598 cpu : usr=0.70%, sys=1.42%, ctx=4846, majf=0, minf=1 00:16:45.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:45.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.598 issued rwts: total=0,5734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.598 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.598 00:16:45.598 Run status group 0 (all jobs): 00:16:45.598 WRITE: bw=927MiB/s (972MB/s), 30.2MiB/s-259MiB/s (31.7MB/s-272MB/s), io=9522MiB (9985MB), run=10052-10271msec 00:16:45.598 00:16:45.598 Disk stats (read/write): 00:16:45.598 nvme0n1: ios=49/17701, merge=0/0, ticks=39/1210618, in_queue=1210657, util=97.72% 00:16:45.598 nvme10n1: ios=49/2499, merge=0/0, ticks=57/1235483, in_queue=1235540, util=97.90% 00:16:45.598 nvme1n1: ios=40/2584, merge=0/0, ticks=37/1237323, in_queue=1237360, util=98.23% 00:16:45.598 nvme2n1: ios=25/20669, merge=0/0, ticks=43/1217102, in_queue=1217145, util=98.15% 00:16:45.598 nvme3n1: ios=0/2549, merge=0/0, ticks=0/1236182, in_queue=1236182, util=97.96% 00:16:45.598 nvme4n1: ios=0/2656, merge=0/0, ticks=0/1238099, in_queue=1238099, util=98.20% 00:16:45.598 nvme5n1: ios=0/2452, merge=0/0, ticks=0/1235295, in_queue=1235295, util=98.32% 00:16:45.598 nvme6n1: ios=0/2572, merge=0/0, ticks=0/1236303, in_queue=1236303, util=98.50% 00:16:45.598 nvme7n1: ios=0/3681, merge=0/0, ticks=0/1214438, in_queue=1214438, util=98.74% 00:16:45.598 nvme8n1: ios=0/6698, merge=0/0, ticks=0/1239616, in_queue=1239616, util=98.95% 00:16:45.598 nvme9n1: ios=0/11273, merge=0/0, ticks=0/1215151, in_queue=1215151, util=98.85% 00:16:45.598 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:16:45.598 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:16:45.598 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:45.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
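Note on the trace from this point to the end of the multiconnection run: it is a single loop from target/multiconnection.sh unrolled eleven times. For each subsystem the test disconnects the initiator, waits until the SPDKn serial no longer shows up in lsblk, then deletes the subsystem over RPC. The following is a condensed sketch reconstructed from the script line numbers visible in the trace (waitforserial_disconnect and rpc_cmd are autotest helpers seen above); it is illustrative, not the verbatim script:

    sync
    for i in $(seq 1 $NVMF_SUBSYS); do                              # NVMF_SUBSYS=11 in this run
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"            # drop the initiator-side connection
        waitforserial_disconnect "SPDK$i"                           # poll lsblk until serial SPDK$i disappears
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i" # remove the subsystem on the target
    done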
00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:45.599 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.599 14:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:45.599 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 
00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:45.599 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:45.599 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 
00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:16:45.599 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:16:45.599 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 
00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:16:45.599 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:16:45.599 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:16:45.600 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 
00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:16:45.600 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:16:45.600 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode11 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:45.600 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:45.600 rmmod nvme_tcp 00:16:45.600 rmmod nvme_fabrics 00:16:45.600 rmmod nvme_keyring 00:16:45.859 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:45.859 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:16:45.859 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:16:45.859 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 87486 ']' 00:16:45.859 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 87486 00:16:45.859 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 87486 ']' 00:16:45.860 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 87486 00:16:45.860 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:16:45.860 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.860 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87486 00:16:45.860 killing process with pid 87486 00:16:45.860 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:45.860 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:45.860 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87486' 00:16:45.860 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 87486 00:16:45.860 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 87486 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:46.119 
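The cleanup traced above and continuing below is nvmftestfini: it unloads the NVMe-oF kernel modules, kills the nvmf_tgt process (pid 87486 in this run), strips only the SPDK_NVMF-tagged iptables rules, and dismantles the veth/bridge/namespace topology. A simplified sketch of those steps as they appear in the trace (the real logic lives in test/nvmf/common.sh; helper details such as the final namespace removal are assumed equivalents, not the verbatim functions):

    sync
    modprobe -v -r nvme-tcp                                  # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                                          # stop the nvmf_tgt app (87486 above)
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the SPDK-added ACCEPT rules
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster && ip link set "$port" down
    done
    ip link delete nvmf_br type bridge                       # bridge and initiator-side veths
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if    # target-side veths inside the namespace
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                         # assumed equivalent of _remove_spdk_ns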
14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:46.119 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:46.378 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:46.378 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.378 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.378 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.378 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:16:46.378 00:16:46.378 real 0m48.671s 00:16:46.378 user 2m46.662s 00:16:46.378 sys 0m25.306s 00:16:46.378 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.378 ************************************ 00:16:46.378 
END TEST nvmf_multiconnection 00:16:46.378 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:46.378 ************************************ 00:16:46.378 14:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:46.378 14:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:46.378 14:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.378 14:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:46.378 ************************************ 00:16:46.378 START TEST nvmf_initiator_timeout 00:16:46.378 ************************************ 00:16:46.378 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:46.378 * Looking for test storage... 00:16:46.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:46.378 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:46.378 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:16:46.378 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:46.638 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:46.638 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:46.638 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:46.638 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:46.638 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:46.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.639 --rc genhtml_branch_coverage=1 00:16:46.639 --rc genhtml_function_coverage=1 00:16:46.639 --rc genhtml_legend=1 00:16:46.639 --rc geninfo_all_blocks=1 00:16:46.639 --rc geninfo_unexecuted_blocks=1 00:16:46.639 00:16:46.639 ' 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:46.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.639 --rc genhtml_branch_coverage=1 00:16:46.639 --rc genhtml_function_coverage=1 00:16:46.639 --rc genhtml_legend=1 00:16:46.639 --rc geninfo_all_blocks=1 00:16:46.639 --rc geninfo_unexecuted_blocks=1 00:16:46.639 00:16:46.639 ' 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:46.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.639 --rc genhtml_branch_coverage=1 00:16:46.639 --rc genhtml_function_coverage=1 00:16:46.639 --rc genhtml_legend=1 00:16:46.639 --rc geninfo_all_blocks=1 00:16:46.639 --rc geninfo_unexecuted_blocks=1 00:16:46.639 00:16:46.639 ' 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:46.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.639 --rc genhtml_branch_coverage=1 00:16:46.639 --rc genhtml_function_coverage=1 00:16:46.639 --rc genhtml_legend=1 00:16:46.639 --rc geninfo_all_blocks=1 00:16:46.639 --rc geninfo_unexecuted_blocks=1 00:16:46.639 00:16:46.639 ' 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.639 14:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:46.639 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:46.639 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:46.640 Cannot find device "nvmf_init_br" 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:46.640 Cannot find device "nvmf_init_br2" 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:46.640 Cannot find device "nvmf_tgt_br" 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:46.640 Cannot find device "nvmf_tgt_br2" 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:46.640 Cannot find device "nvmf_init_br" 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:46.640 Cannot find device "nvmf_init_br2" 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:46.640 Cannot find device "nvmf_tgt_br" 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:46.640 Cannot find device "nvmf_tgt_br2" 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:16:46.640 14:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:46.640 Cannot find device "nvmf_br" 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:46.640 Cannot find device "nvmf_init_if" 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:46.640 Cannot find device "nvmf_init_if2" 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:46.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:46.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:46.640 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:46.899 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:46.899 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:46.899 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:16:46.899 00:16:46.899 --- 10.0.0.3 ping statistics --- 00:16:46.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.899 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:46.899 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:46.899 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:16:46.899 00:16:46.899 --- 10.0.0.4 ping statistics --- 00:16:46.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.899 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:46.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:46.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:46.899 00:16:46.899 --- 10.0.0.1 ping statistics --- 00:16:46.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.899 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:46.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:16:46.899 00:16:46.899 --- 10.0.0.2 ping statistics --- 00:16:46.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.899 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.899 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:46.900 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:47.159 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:16:47.159 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:47.159 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:47.159 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.159 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=88589 00:16:47.159 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:47.159 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 88589 00:16:47.159 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 88589 ']' 00:16:47.159 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.159 14:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.159 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.159 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.159 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.159 [2024-12-16 14:32:39.169700] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:16:47.159 [2024-12-16 14:32:39.169802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.159 [2024-12-16 14:32:39.324096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:47.159 [2024-12-16 14:32:39.347975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.159 [2024-12-16 14:32:39.348054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.159 [2024-12-16 14:32:39.348068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.159 [2024-12-16 14:32:39.348080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.159 [2024-12-16 14:32:39.348088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
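At this point the harness has the target running: nvmfappstart records the pid of an nvmf_tgt launched inside the nvmf_tgt_ns_spdk namespace and then blocks on waitforlisten until the RPC socket answers. A minimal sketch of what that amounts to, assuming a simple poll loop (rpc_get_methods and the 0.5 s interval are illustrative choices, not the harness's exact helper):

# start the target inside the test namespace (core mask 0xF, all tracepoint groups enabled)
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the default UNIX-domain RPC socket until the application responds
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done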
00:16:47.159 [2024-12-16 14:32:39.349014] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.159 [2024-12-16 14:32:39.349185] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.159 [2024-12-16 14:32:39.349306] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:47.159 [2024-12-16 14:32:39.349308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.418 [2024-12-16 14:32:39.382301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.419 Malloc0 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.419 Delay0 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.419 [2024-12-16 14:32:39.514829] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:47.419 14:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.419 [2024-12-16 14:32:39.547364] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.419 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:47.678 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:16:47.678 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:16:47.678 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.678 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:47.678 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:16:49.596 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:49.596 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:49.596 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.596 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:49.596 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.596 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:16:49.596 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=88645 00:16:49.596 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:16:49.596 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:16:49.596 [global] 00:16:49.596 thread=1 00:16:49.596 invalidate=1 00:16:49.596 rw=write 00:16:49.596 time_based=1 00:16:49.596 runtime=60 00:16:49.596 ioengine=libaio 00:16:49.596 direct=1 00:16:49.596 bs=4096 00:16:49.596 iodepth=1 00:16:49.596 norandommap=0 00:16:49.596 numjobs=1 00:16:49.596 00:16:49.596 verify_dump=1 00:16:49.596 verify_backlog=512 00:16:49.596 verify_state_save=0 00:16:49.596 do_verify=1 00:16:49.596 verify=crc32c-intel 00:16:49.596 [job0] 00:16:49.596 filename=/dev/nvme0n1 00:16:49.596 Could not set queue depth (nvme0n1) 00:16:49.855 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.855 fio-3.35 00:16:49.855 Starting 1 thread 00:16:53.143 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:16:53.143 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.143 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.143 true 00:16:53.143 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.143 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:16:53.143 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.143 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.143 true 00:16:53.143 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.143 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:16:53.143 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.143 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.143 true 00:16:53.143 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.143 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:16:53.143 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.143 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.143 true 00:16:53.143 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.143 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:16:55.677 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:16:55.677 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.677 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:55.677 true 00:16:55.677 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.677 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:16:55.677 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.677 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:55.677 true 00:16:55.677 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.677 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:16:55.677 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.678 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:55.678 true 00:16:55.678 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.678 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:16:55.678 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.678 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:55.678 true 00:16:55.678 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.678 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:16:55.678 14:32:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 88645 00:17:51.908 00:17:51.908 job0: (groupid=0, jobs=1): err= 0: pid=88672: Mon Dec 16 14:33:41 2024 00:17:51.908 read: IOPS=844, BW=3379KiB/s (3460kB/s)(198MiB/60000msec) 00:17:51.908 slat (usec): min=10, max=11807, avg=13.87, stdev=67.10 00:17:51.908 clat (usec): min=152, max=40475k, avg=995.14, stdev=179776.34 00:17:51.908 lat (usec): min=163, max=40475k, avg=1009.01, stdev=179776.35 00:17:51.908 clat percentiles (usec): 00:17:51.908 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:17:51.908 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 198], 00:17:51.908 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 225], 95.00th=[ 237], 00:17:51.908 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 310], 99.95th=[ 367], 00:17:51.908 | 99.99th=[ 693] 00:17:51.908 write: IOPS=846, BW=3387KiB/s (3468kB/s)(198MiB/60000msec); 0 zone resets 00:17:51.908 slat (usec): min=12, max=550, avg=19.34, stdev= 6.78 00:17:51.908 clat (usec): min=3, max=7525, avg=151.85, stdev=41.56 00:17:51.908 lat (usec): min=129, max=7545, avg=171.19, stdev=42.49 00:17:51.908 clat percentiles (usec): 00:17:51.908 | 1.00th=[ 120], 5.00th=[ 125], 10.00th=[ 129], 20.00th=[ 135], 00:17:51.908 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 155], 00:17:51.908 | 70.00th=[ 159], 80.00th=[ 167], 90.00th=[ 178], 95.00th=[ 188], 00:17:51.908 | 99.00th=[ 210], 
99.50th=[ 219], 99.90th=[ 260], 99.95th=[ 293], 00:17:51.908 | 99.99th=[ 660] 00:17:51.908 bw ( KiB/s): min= 4528, max=12288, per=100.00%, avg=10187.49, stdev=1831.15, samples=39 00:17:51.908 iops : min= 1132, max= 3072, avg=2546.87, stdev=457.79, samples=39 00:17:51.908 lat (usec) : 4=0.01%, 250=99.03%, 500=0.95%, 750=0.01%, 1000=0.01% 00:17:51.908 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:17:51.908 cpu : usr=0.59%, sys=2.19%, ctx=101505, majf=0, minf=5 00:17:51.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:51.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.908 issued rwts: total=50688,50804,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:51.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:51.908 00:17:51.908 Run status group 0 (all jobs): 00:17:51.908 READ: bw=3379KiB/s (3460kB/s), 3379KiB/s-3379KiB/s (3460kB/s-3460kB/s), io=198MiB (208MB), run=60000-60000msec 00:17:51.908 WRITE: bw=3387KiB/s (3468kB/s), 3387KiB/s-3387KiB/s (3468kB/s-3468kB/s), io=198MiB (208MB), run=60000-60000msec 00:17:51.908 00:17:51.908 Disk stats (read/write): 00:17:51.908 nvme0n1: ios=50647/50688, merge=0/0, ticks=10293/8092, in_queue=18385, util=99.78% 00:17:51.908 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:17:51.909 nvmf hotplug test: fio successful as expected 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 
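Condensed, the initiator_timeout flow replayed above is: stack a delay bdev on a malloc bdev, export it over NVMe/TCP, connect the kernel initiator, start a 60 s fio write job, raise the delay-bdev latencies into the tens-of-seconds range for a few seconds, restore them, and require that fio still exits cleanly (the 'nvmf hotplug test: fio successful as expected' message). A sketch of that sequence, with rpc_cmd standing in for the harness's RPC wrapper and the loops a simplification of the per-latency calls seen in the log:

rpc_cmd bdev_malloc_create 64 512 -b Malloc0                              # 64 MB backing bdev, 512 B blocks
rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30    # start with 30 us latencies
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v & fio_pid=$!           # 60 s verified write job on /dev/nvme0n1
sleep 3
for lat in avg_read avg_write p99_read p99_write; do                      # push latencies far above normal ...
    rpc_cmd bdev_delay_update_latency Delay0 "$lat" 31000000
done
sleep 3
for lat in avg_read avg_write p99_read p99_write; do                      # ... then drop them back to 30 us
    rpc_cmd bdev_delay_update_latency Delay0 "$lat" 30
done
wait "$fio_pid"                                                           # fio must still complete with status 0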
00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:51.909 rmmod nvme_tcp 00:17:51.909 rmmod nvme_fabrics 00:17:51.909 rmmod nvme_keyring 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 88589 ']' 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 88589 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 88589 ']' 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 88589 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88589 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88589' 00:17:51.909 killing process with pid 88589 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 88589 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 88589 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:17:51.909 14:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:17:51.909 00:17:51.909 real 1m4.160s 00:17:51.909 user 3m50.743s 00:17:51.909 sys 0m21.475s 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:51.909 ************************************ 00:17:51.909 END TEST nvmf_initiator_timeout 00:17:51.909 ************************************ 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test 
nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:51.909 ************************************ 00:17:51.909 START TEST nvmf_nsid 00:17:51.909 ************************************ 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:51.909 * Looking for test storage... 00:17:51.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.909 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:51.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.910 --rc genhtml_branch_coverage=1 00:17:51.910 --rc genhtml_function_coverage=1 00:17:51.910 --rc genhtml_legend=1 00:17:51.910 --rc geninfo_all_blocks=1 00:17:51.910 --rc geninfo_unexecuted_blocks=1 00:17:51.910 00:17:51.910 ' 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:51.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.910 --rc genhtml_branch_coverage=1 00:17:51.910 --rc genhtml_function_coverage=1 00:17:51.910 --rc genhtml_legend=1 00:17:51.910 --rc geninfo_all_blocks=1 00:17:51.910 --rc geninfo_unexecuted_blocks=1 00:17:51.910 00:17:51.910 ' 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:51.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.910 --rc genhtml_branch_coverage=1 00:17:51.910 --rc genhtml_function_coverage=1 00:17:51.910 --rc genhtml_legend=1 00:17:51.910 --rc geninfo_all_blocks=1 00:17:51.910 --rc geninfo_unexecuted_blocks=1 00:17:51.910 00:17:51.910 ' 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:51.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.910 --rc genhtml_branch_coverage=1 00:17:51.910 --rc genhtml_function_coverage=1 00:17:51.910 --rc genhtml_legend=1 00:17:51.910 --rc geninfo_all_blocks=1 00:17:51.910 --rc geninfo_unexecuted_blocks=1 00:17:51.910 00:17:51.910 ' 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:51.910 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:51.910 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:51.911 Cannot find device "nvmf_init_br" 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:51.911 Cannot find device "nvmf_init_br2" 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:51.911 Cannot find device "nvmf_tgt_br" 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:51.911 Cannot find device "nvmf_tgt_br2" 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:51.911 Cannot find device "nvmf_init_br" 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:51.911 Cannot find device "nvmf_init_br2" 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:51.911 Cannot find device "nvmf_tgt_br" 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:51.911 Cannot find device "nvmf_tgt_br2" 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:51.911 Cannot find device "nvmf_br" 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:51.911 Cannot find device "nvmf_init_if" 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:51.911 Cannot find device "nvmf_init_if2" 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:51.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:17:51.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:51.911 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
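The nsid run rebuilds the same nvmf_veth_init topology the initiator_timeout test used: the target's two interfaces live inside the nvmf_tgt_ns_spdk namespace (10.0.0.3/.4), the initiator keeps two on the host side (10.0.0.1/.2), and a single bridge joins the four veth peers. Restated as a compact sketch of the commands captured above:

ip netns add nvmf_tgt_ns_spdk                                # target gets its own network namespace
ip link add nvmf_init_if  type veth peer name nvmf_init_br   # initiator-side veth pairs
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br    # target-side veth pairs
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk              # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                              # one bridge ties the four bridge-side peers together
ip link set nvmf_br up
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" master nvmf_br
done
# all interfaces (plus lo inside the namespace) are then brought up, port 4420/tcp is opened via
# SPDK_NVMF-tagged iptables rules, and connectivity is sanity-checked with the pings below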
00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:51.911 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:51.911 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:17:51.911 00:17:51.911 --- 10.0.0.3 ping statistics --- 00:17:51.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.911 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:51.911 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:51.911 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:17:51.911 00:17:51.911 --- 10.0.0.4 ping statistics --- 00:17:51.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.911 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:51.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:51.911 00:17:51.911 --- 10.0.0.1 ping statistics --- 00:17:51.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.911 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:51.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:51.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:17:51.911 00:17:51.911 --- 10.0.0.2 ping statistics --- 00:17:51.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.911 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:51.911 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=89546 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 89546 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 89546 ']' 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:51.912 [2024-12-16 14:33:43.297998] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
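With connectivity verified, nvmf/common.sh prepends "ip netns exec nvmf_tgt_ns_spdk" to the target command line (NVMF_APP), loads nvme-tcp on the host, and nvmfappstart launches the first target with core mask 0x1; waitforlisten then blocks until the default RPC socket answers. The waitforlisten helper itself is not shown in this excerpt, so the polling loop below is only an illustrative stand-in:

  modprobe nvme-tcp

  # target command line as traced above (this run's pid was 89546)
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
  nvmfpid=$!

  # hypothetical stand-in for waitforlisten: poll the RPC socket until it answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1    # bail out if the target already died
      sleep 0.5
  done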
00:17:51.912 [2024-12-16 14:33:43.298087] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.912 [2024-12-16 14:33:43.448849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.912 [2024-12-16 14:33:43.473265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.912 [2024-12-16 14:33:43.473329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.912 [2024-12-16 14:33:43.473343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.912 [2024-12-16 14:33:43.473353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.912 [2024-12-16 14:33:43.473372] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.912 [2024-12-16 14:33:43.473753] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.912 [2024-12-16 14:33:43.508675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=89565 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=5fbeff00-b5f7-414f-b0b1-381d99f7c364 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=dfa4e00b-a266-4bb6-8a36-bec149121281 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=766089c0-5036-42f3-9d89-6c5b39a145c6 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:51.912 null0 00:17:51.912 null1 00:17:51.912 null2 00:17:51.912 [2024-12-16 14:33:43.665500] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.912 [2024-12-16 14:33:43.674994] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:51.912 [2024-12-16 14:33:43.675078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89565 ] 00:17:51.912 [2024-12-16 14:33:43.689669] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:51.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 89565 /var/tmp/tgt2.sock 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 89565 ']' 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
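At this point there are two SPDK targets: the first one running inside the namespace (its RPC socket is the default /var/tmp/spdk.sock) and a second spdk_tgt on core 1 driven through /var/tmp/tgt2.sock, plus three freshly generated namespace UUIDs. The rpc_cmd payload is collapsed in the trace, so the exact bdev and subsystem layout used by nsid.sh is not visible here. Purely as an illustration of the mechanism under test, namely that a namespace created with a fixed UUID reports that UUID (dashes stripped) as its NGUID, a hedged single-namespace sketch could look like this; the subsystem NQN, listener address and host identity are the ones that appear in the nvme connect call later in the trace, everything else is assumed:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock"
  uuid=5fbeff00-b5f7-414f-b0b1-381d99f7c364        # first UUID generated above

  # target side (assumed layout; rpc_cmd's actual calls are not shown in the log):
  # a null bdev exposed as namespace 1 with a fixed UUID
  $rpc nvmf_create_transport -t tcp
  $rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
  $rpc bdev_null_create null0 64 512
  $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 -n 1 -u "$uuid"
  $rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421

  # host side: the NGUID reported for the namespace should equal the UUID
  # with its dashes removed (nsid.sh compares the uppercase forms)
  nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 \
      --hostid=63735ac0-cf43-4c13-880c-ea4676416181
  nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
  [[ "${nguid,,}" == "$(tr -d - <<< "$uuid")" ]] && echo "nsid 1: NGUID matches its UUID"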
00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.912 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:51.912 [2024-12-16 14:33:43.829901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.912 [2024-12-16 14:33:43.855591] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.912 [2024-12-16 14:33:43.899671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:51.912 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.912 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:51.912 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:52.480 [2024-12-16 14:33:44.441163] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.480 [2024-12-16 14:33:44.457305] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:17:52.480 nvme0n1 nvme0n2 00:17:52.480 nvme1n1 00:17:52.480 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:52.480 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:52.480 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 00:17:52.480 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:17:52.480 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:17:52.480 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:17:52.480 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:17:52.480 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:17:52.480 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:17:52.480 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:17:52.480 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:52.480 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:52.480 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:52.481 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:17:52.481 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:17:52.481 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:53.860 14:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 5fbeff00-b5f7-414f-b0b1-381d99f7c364 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5fbeff00b5f7414fb0b1381d99f7c364 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5FBEFF00B5F7414FB0B1381D99F7C364 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 5FBEFF00B5F7414FB0B1381D99F7C364 == \5\F\B\E\F\F\0\0\B\5\F\7\4\1\4\F\B\0\B\1\3\8\1\D\9\9\F\7\C\3\6\4 ]] 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid dfa4e00b-a266-4bb6-8a36-bec149121281 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=dfa4e00ba2664bb68a36bec149121281 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DFA4E00BA2664BB68A36BEC149121281 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ DFA4E00BA2664BB68A36BEC149121281 == \D\F\A\4\E\0\0\B\A\2\6\6\4\B\B\6\8\A\3\6\B\E\C\1\4\9\1\2\1\2\8\1 ]] 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:53.860 14:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 766089c0-5036-42f3-9d89-6c5b39a145c6 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=766089c0503642f39d896c5b39a145c6 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 766089C0503642F39D896C5B39A145C6 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 766089C0503642F39D896C5B39A145C6 == \7\6\6\0\8\9\C\0\5\0\3\6\4\2\F\3\9\D\8\9\6\C\5\B\3\9\A\1\4\5\C\6 ]] 00:17:53.860 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:17:53.860 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:17:53.860 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:17:53.860 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 89565 00:17:53.860 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 89565 ']' 00:17:53.860 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 89565 00:17:53.860 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:53.860 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.860 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89565 00:17:54.119 killing process with pid 89565 00:17:54.119 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:54.119 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:54.119 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89565' 00:17:54.119 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 89565 00:17:54.119 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 89565 00:17:54.119 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:17:54.119 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:54.119 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:54.379 rmmod nvme_tcp 00:17:54.379 rmmod nvme_fabrics 00:17:54.379 rmmod nvme_keyring 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 89546 ']' 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 89546 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 89546 ']' 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 89546 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89546 00:17:54.379 killing process with pid 89546 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89546' 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 89546 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 89546 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:17:54.379 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.638 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.897 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:17:54.897 00:17:54.897 real 0m4.210s 00:17:54.897 user 0m6.275s 00:17:54.897 sys 0m1.525s 00:17:54.897 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.897 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:54.897 ************************************ 00:17:54.897 END TEST nvmf_nsid 00:17:54.897 ************************************ 00:17:54.897 14:33:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:54.897 00:17:54.897 real 6m51.999s 00:17:54.897 user 17m6.821s 00:17:54.897 sys 1m52.177s 00:17:54.897 14:33:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.897 14:33:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:54.897 ************************************ 00:17:54.897 END TEST nvmf_target_extra 00:17:54.897 ************************************ 00:17:54.897 14:33:46 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:54.897 14:33:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:54.897 14:33:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.897 14:33:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:54.897 ************************************ 00:17:54.897 START TEST nvmf_host 00:17:54.897 ************************************ 00:17:54.897 14:33:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:54.897 * Looking for test storage... 
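Teardown in the nsid test above is the mirror image of setup: nvme disconnect detaches the controller, both target processes are killed, the nvme-tcp and nvme-fabrics modules are unloaded, and iptr strips the firewall rules. Because every rule installed by ipts earlier carried an "SPDK_NVMF:" comment, the cleanup can remove exactly those rules by filtering the saved ruleset. A condensed sketch of that add/strip pairing, taken directly from the commands in the trace:

  # setup (ipts wrapper): tag each inserted rule with an SPDK_NVMF comment
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

  # teardown (iptr): rewrite the ruleset without any SPDK_NVMF-tagged rules
  iptables-save | grep -v SPDK_NVMF | iptables-restore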
00:17:54.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:54.897 14:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:54.897 14:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:17:54.897 14:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:55.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.157 --rc genhtml_branch_coverage=1 00:17:55.157 --rc genhtml_function_coverage=1 00:17:55.157 --rc genhtml_legend=1 00:17:55.157 --rc geninfo_all_blocks=1 00:17:55.157 --rc geninfo_unexecuted_blocks=1 00:17:55.157 00:17:55.157 ' 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:55.157 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:55.157 --rc genhtml_branch_coverage=1 00:17:55.157 --rc genhtml_function_coverage=1 00:17:55.157 --rc genhtml_legend=1 00:17:55.157 --rc geninfo_all_blocks=1 00:17:55.157 --rc geninfo_unexecuted_blocks=1 00:17:55.157 00:17:55.157 ' 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:55.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.157 --rc genhtml_branch_coverage=1 00:17:55.157 --rc genhtml_function_coverage=1 00:17:55.157 --rc genhtml_legend=1 00:17:55.157 --rc geninfo_all_blocks=1 00:17:55.157 --rc geninfo_unexecuted_blocks=1 00:17:55.157 00:17:55.157 ' 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:55.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.157 --rc genhtml_branch_coverage=1 00:17:55.157 --rc genhtml_function_coverage=1 00:17:55.157 --rc genhtml_legend=1 00:17:55.157 --rc geninfo_all_blocks=1 00:17:55.157 --rc geninfo_unexecuted_blocks=1 00:17:55.157 00:17:55.157 ' 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.157 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:55.158 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:55.158 
14:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.158 ************************************ 00:17:55.158 START TEST nvmf_identify 00:17:55.158 ************************************ 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:55.158 * Looking for test storage... 00:17:55.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:55.158 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:55.417 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:55.417 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:55.417 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:55.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.418 --rc genhtml_branch_coverage=1 00:17:55.418 --rc genhtml_function_coverage=1 00:17:55.418 --rc genhtml_legend=1 00:17:55.418 --rc geninfo_all_blocks=1 00:17:55.418 --rc geninfo_unexecuted_blocks=1 00:17:55.418 00:17:55.418 ' 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:55.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.418 --rc genhtml_branch_coverage=1 00:17:55.418 --rc genhtml_function_coverage=1 00:17:55.418 --rc genhtml_legend=1 00:17:55.418 --rc geninfo_all_blocks=1 00:17:55.418 --rc geninfo_unexecuted_blocks=1 00:17:55.418 00:17:55.418 ' 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:55.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.418 --rc genhtml_branch_coverage=1 00:17:55.418 --rc genhtml_function_coverage=1 00:17:55.418 --rc genhtml_legend=1 00:17:55.418 --rc geninfo_all_blocks=1 00:17:55.418 --rc geninfo_unexecuted_blocks=1 00:17:55.418 00:17:55.418 ' 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:55.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.418 --rc genhtml_branch_coverage=1 00:17:55.418 --rc genhtml_function_coverage=1 00:17:55.418 --rc genhtml_legend=1 00:17:55.418 --rc geninfo_all_blocks=1 00:17:55.418 --rc geninfo_unexecuted_blocks=1 00:17:55.418 00:17:55.418 ' 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.418 
14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:55.418 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.418 14:33:47 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:55.418 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:55.419 Cannot find device "nvmf_init_br" 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:55.419 Cannot find device "nvmf_init_br2" 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:55.419 Cannot find device "nvmf_tgt_br" 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:55.419 Cannot find device "nvmf_tgt_br2" 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:55.419 Cannot find device "nvmf_init_br" 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:55.419 Cannot find device "nvmf_init_br2" 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:55.419 Cannot find device "nvmf_tgt_br" 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:55.419 Cannot find device "nvmf_tgt_br2" 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:55.419 Cannot find device "nvmf_br" 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:55.419 Cannot find device "nvmf_init_if" 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:55.419 Cannot find device "nvmf_init_if2" 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:55.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:55.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:55.419 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:55.678 
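The "Cannot find device" and "Cannot open network namespace" messages above are expected: before building a fresh topology, nvmf_veth_init tears down whatever interfaces a previous run may have left behind, and each teardown command is followed by "true" so a missing device does not abort the script under set -e. A hedged sketch of that best-effort cleanup pattern (interface and namespace names taken from the trace; the exact helper in nvmf/common.sh may differ):

    # Best-effort teardown; errors from a previous run are ignored.
    ip link delete nvmf_br            2> /dev/null || true
    ip link delete nvmf_init_if       2> /dev/null || true
    ip link delete nvmf_init_if2      2> /dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2> /dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2> /dev/null || true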
14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:55.678 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:55.679 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:55.679 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:17:55.679 00:17:55.679 --- 10.0.0.3 ping statistics --- 00:17:55.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.679 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:55.679 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:55.679 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:17:55.679 00:17:55.679 --- 10.0.0.4 ping statistics --- 00:17:55.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.679 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:55.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:55.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:17:55.679 00:17:55.679 --- 10.0.0.1 ping statistics --- 00:17:55.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.679 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:55.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:55.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:17:55.679 00:17:55.679 --- 10.0.0.2 ping statistics --- 00:17:55.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.679 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=89922 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 89922 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 89922 ']' 00:17:55.679 
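With the four ping checks above passing in both directions, the veth/bridge topology is in place: nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) stay on the host as initiator interfaces, nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace for the target, and the peer ends of all four pairs are enslaved to the nvmf_br bridge. The traced commands, consolidated into one hedged sketch (names and addresses exactly as in the log; only one pair per side is shown, the trace creates two of each):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side goes into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    # open TCP/4420 for the NVMe-oF listener (the trace tags the rules with an SPDK_NVMF comment)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                  # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host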
14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.679 14:33:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:55.938 [2024-12-16 14:33:47.906591] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:55.938 [2024-12-16 14:33:47.906722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.938 [2024-12-16 14:33:48.060576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:55.938 [2024-12-16 14:33:48.086235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.938 [2024-12-16 14:33:48.086307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.938 [2024-12-16 14:33:48.086321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.938 [2024-12-16 14:33:48.086332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.938 [2024-12-16 14:33:48.086340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
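The target application is launched inside the namespace, and waitforlisten blocks until the process answers on the RPC socket (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above). A minimal sketch of that launch-and-wait step, assuming the usual SPDK repo layout with scripts/rpc.py available; the real waitforlisten in common/autotest_common.sh is more careful:

    # same flags as the trace: instance 0, all tracepoint groups, cores 0-3
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC socket until the app responds
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods > /dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done

As the startup notices above point out, this instance can also be inspected with 'spdk_trace -s nvmf -i 0', or /dev/shm/nvmf_trace.0 can be copied for offline analysis.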
00:17:55.938 [2024-12-16 14:33:48.087344] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.938 [2024-12-16 14:33:48.087860] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.938 [2024-12-16 14:33:48.088098] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.938 [2024-12-16 14:33:48.088050] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:55.938 [2024-12-16 14:33:48.121748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.197 [2024-12-16 14:33:48.178295] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.197 Malloc0 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.197 [2024-12-16 14:33:48.277756] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.197 [ 00:17:56.197 { 00:17:56.197 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:56.197 "subtype": "Discovery", 00:17:56.197 "listen_addresses": [ 00:17:56.197 { 00:17:56.197 "trtype": "TCP", 00:17:56.197 "adrfam": "IPv4", 00:17:56.197 "traddr": "10.0.0.3", 00:17:56.197 "trsvcid": "4420" 00:17:56.197 } 00:17:56.197 ], 00:17:56.197 "allow_any_host": true, 00:17:56.197 "hosts": [] 00:17:56.197 }, 00:17:56.197 { 00:17:56.197 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.197 "subtype": "NVMe", 00:17:56.197 "listen_addresses": [ 00:17:56.197 { 00:17:56.197 "trtype": "TCP", 00:17:56.197 "adrfam": "IPv4", 00:17:56.197 "traddr": "10.0.0.3", 00:17:56.197 "trsvcid": "4420" 00:17:56.197 } 00:17:56.197 ], 00:17:56.197 "allow_any_host": true, 00:17:56.197 "hosts": [], 00:17:56.197 "serial_number": "SPDK00000000000001", 00:17:56.197 "model_number": "SPDK bdev Controller", 00:17:56.197 "max_namespaces": 32, 00:17:56.197 "min_cntlid": 1, 00:17:56.197 "max_cntlid": 65519, 00:17:56.197 "namespaces": [ 00:17:56.197 { 00:17:56.197 "nsid": 1, 00:17:56.197 "bdev_name": "Malloc0", 00:17:56.197 "name": "Malloc0", 00:17:56.197 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:56.197 "eui64": "ABCDEF0123456789", 00:17:56.197 "uuid": "594918ee-eca7-4ee9-9f53-53cbad85f318" 00:17:56.197 } 00:17:56.197 ] 00:17:56.197 } 00:17:56.197 ] 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.197 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:56.197 [2024-12-16 14:33:48.336156] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
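The JSON listing above is the result of the rpc_cmd sequence traced before it: create the TCP transport, back a namespace with a 64 MB malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1, attach the namespace with the NGUID/EUI64 shown in the listing, and add listeners for both the subsystem and the discovery service on 10.0.0.3:4420. The same configuration expressed as plain rpc.py calls (a hedged equivalent; rpc_cmd in the harness is a thin wrapper around this):

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_get_subsystems    # prints the JSON shown above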
00:17:56.197 [2024-12-16 14:33:48.336206] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89950 ] 00:17:56.459 [2024-12-16 14:33:48.494278] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:17:56.459 [2024-12-16 14:33:48.494364] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:56.459 [2024-12-16 14:33:48.494372] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:56.459 [2024-12-16 14:33:48.494384] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:56.459 [2024-12-16 14:33:48.494394] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:56.459 [2024-12-16 14:33:48.494734] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:17:56.459 [2024-12-16 14:33:48.494798] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x211ea00 0 00:17:56.459 [2024-12-16 14:33:48.501549] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:56.459 [2024-12-16 14:33:48.501573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:56.459 [2024-12-16 14:33:48.501595] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:56.459 [2024-12-16 14:33:48.501598] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:56.459 [2024-12-16 14:33:48.501632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.501639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.501643] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x211ea00) 00:17:56.459 [2024-12-16 14:33:48.501656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:56.459 [2024-12-16 14:33:48.501686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21576c0, cid 0, qid 0 00:17:56.459 [2024-12-16 14:33:48.509494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.459 [2024-12-16 14:33:48.509516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.459 [2024-12-16 14:33:48.509537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.509542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21576c0) on tqpair=0x211ea00 00:17:56.459 [2024-12-16 14:33:48.509558] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:56.459 [2024-12-16 14:33:48.509566] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:17:56.459 [2024-12-16 14:33:48.509573] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:17:56.459 [2024-12-16 14:33:48.509592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.509598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
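The DEBUG lines surrounding this point, and the decoded report further down, all come from the spdk_nvme_identify run started at the end of the subsystem listing: the -r argument is the transport ID of the discovery subsystem and -L all enables every debug log flag, which is what produces the nvme_tcp.c / nvme_ctrlr.c trace. The invocation, standalone, so it can be rerun against an already-configured target (path as used in this workspace):

    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all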
00:17:56.459 [2024-12-16 14:33:48.509602] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x211ea00) 00:17:56.459 [2024-12-16 14:33:48.509611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.459 [2024-12-16 14:33:48.509639] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21576c0, cid 0, qid 0 00:17:56.459 [2024-12-16 14:33:48.509710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.459 [2024-12-16 14:33:48.509717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.459 [2024-12-16 14:33:48.509721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.509726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21576c0) on tqpair=0x211ea00 00:17:56.459 [2024-12-16 14:33:48.509736] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:17:56.459 [2024-12-16 14:33:48.509761] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:17:56.459 [2024-12-16 14:33:48.509769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.509774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.509778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x211ea00) 00:17:56.459 [2024-12-16 14:33:48.509786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.459 [2024-12-16 14:33:48.509806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21576c0, cid 0, qid 0 00:17:56.459 [2024-12-16 14:33:48.509859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.459 [2024-12-16 14:33:48.509871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.459 [2024-12-16 14:33:48.509875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.509879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21576c0) on tqpair=0x211ea00 00:17:56.459 [2024-12-16 14:33:48.509885] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:17:56.459 [2024-12-16 14:33:48.509895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:56.459 [2024-12-16 14:33:48.509902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.509907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.509911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x211ea00) 00:17:56.459 [2024-12-16 14:33:48.509919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.459 [2024-12-16 14:33:48.509937] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21576c0, cid 0, qid 0 00:17:56.459 [2024-12-16 14:33:48.509979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.459 [2024-12-16 14:33:48.509987] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.459 [2024-12-16 14:33:48.509991] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.509995] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21576c0) on tqpair=0x211ea00 00:17:56.459 [2024-12-16 14:33:48.510001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:56.459 [2024-12-16 14:33:48.510012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.510017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.510021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x211ea00) 00:17:56.459 [2024-12-16 14:33:48.510029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.459 [2024-12-16 14:33:48.510047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21576c0, cid 0, qid 0 00:17:56.459 [2024-12-16 14:33:48.510094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.459 [2024-12-16 14:33:48.510101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.459 [2024-12-16 14:33:48.510105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.510110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21576c0) on tqpair=0x211ea00 00:17:56.459 [2024-12-16 14:33:48.510115] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:56.459 [2024-12-16 14:33:48.510122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:56.459 [2024-12-16 14:33:48.510130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:56.459 [2024-12-16 14:33:48.510240] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:17:56.459 [2024-12-16 14:33:48.510256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:56.459 [2024-12-16 14:33:48.510277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.510282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.510286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x211ea00) 00:17:56.459 [2024-12-16 14:33:48.510293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.459 [2024-12-16 14:33:48.510313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21576c0, cid 0, qid 0 00:17:56.459 [2024-12-16 14:33:48.510367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.459 [2024-12-16 14:33:48.510375] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.459 [2024-12-16 14:33:48.510378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:17:56.459 [2024-12-16 14:33:48.510383] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21576c0) on tqpair=0x211ea00 00:17:56.459 [2024-12-16 14:33:48.510388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:56.459 [2024-12-16 14:33:48.510398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.510403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.459 [2024-12-16 14:33:48.510407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x211ea00) 00:17:56.460 [2024-12-16 14:33:48.510415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.460 [2024-12-16 14:33:48.510432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21576c0, cid 0, qid 0 00:17:56.460 [2024-12-16 14:33:48.510479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.460 [2024-12-16 14:33:48.510487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.460 [2024-12-16 14:33:48.510491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.510495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21576c0) on tqpair=0x211ea00 00:17:56.460 [2024-12-16 14:33:48.510501] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:56.460 [2024-12-16 14:33:48.510506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:56.460 [2024-12-16 14:33:48.510515] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:17:56.460 [2024-12-16 14:33:48.510525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:56.460 [2024-12-16 14:33:48.510536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.510540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x211ea00) 00:17:56.460 [2024-12-16 14:33:48.510548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.460 [2024-12-16 14:33:48.510569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21576c0, cid 0, qid 0 00:17:56.460 [2024-12-16 14:33:48.510675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.460 [2024-12-16 14:33:48.510682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.460 [2024-12-16 14:33:48.510686] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.510690] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x211ea00): datao=0, datal=4096, cccid=0 00:17:56.460 [2024-12-16 14:33:48.510695] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21576c0) on tqpair(0x211ea00): expected_datao=0, payload_size=4096 00:17:56.460 [2024-12-16 14:33:48.510700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.510734] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.510740] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.510749] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.460 [2024-12-16 14:33:48.510756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.460 [2024-12-16 14:33:48.510760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.510764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21576c0) on tqpair=0x211ea00 00:17:56.460 [2024-12-16 14:33:48.510774] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:17:56.460 [2024-12-16 14:33:48.510780] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:17:56.460 [2024-12-16 14:33:48.510785] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:17:56.460 [2024-12-16 14:33:48.510790] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:17:56.460 [2024-12-16 14:33:48.510796] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:17:56.460 [2024-12-16 14:33:48.510801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:17:56.460 [2024-12-16 14:33:48.510811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:56.460 [2024-12-16 14:33:48.510819] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.510824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.510828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x211ea00) 00:17:56.460 [2024-12-16 14:33:48.510837] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:56.460 [2024-12-16 14:33:48.510858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21576c0, cid 0, qid 0 00:17:56.460 [2024-12-16 14:33:48.510913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.460 [2024-12-16 14:33:48.510921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.460 [2024-12-16 14:33:48.510924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.510930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21576c0) on tqpair=0x211ea00 00:17:56.460 [2024-12-16 14:33:48.510938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.510942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.510946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x211ea00) 00:17:56.460 [2024-12-16 14:33:48.510954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.460 
[2024-12-16 14:33:48.510960] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.510965] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.510969] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x211ea00) 00:17:56.460 [2024-12-16 14:33:48.510975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.460 [2024-12-16 14:33:48.510982] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.510986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.510990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x211ea00) 00:17:56.460 [2024-12-16 14:33:48.510996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.460 [2024-12-16 14:33:48.511003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.511007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.511011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x211ea00) 00:17:56.460 [2024-12-16 14:33:48.511017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.460 [2024-12-16 14:33:48.511023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:56.460 [2024-12-16 14:33:48.511037] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:56.460 [2024-12-16 14:33:48.511046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.511065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x211ea00) 00:17:56.460 [2024-12-16 14:33:48.511073] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.460 [2024-12-16 14:33:48.511108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21576c0, cid 0, qid 0 00:17:56.460 [2024-12-16 14:33:48.511115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157840, cid 1, qid 0 00:17:56.460 [2024-12-16 14:33:48.511120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21579c0, cid 2, qid 0 00:17:56.460 [2024-12-16 14:33:48.511125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157b40, cid 3, qid 0 00:17:56.460 [2024-12-16 14:33:48.511129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157cc0, cid 4, qid 0 00:17:56.460 [2024-12-16 14:33:48.511209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.460 [2024-12-16 14:33:48.511216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.460 [2024-12-16 14:33:48.511220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.511224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157cc0) on tqpair=0x211ea00 00:17:56.460 [2024-12-16 
14:33:48.511230] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:17:56.460 [2024-12-16 14:33:48.511236] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:17:56.460 [2024-12-16 14:33:48.511247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.511252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x211ea00) 00:17:56.460 [2024-12-16 14:33:48.511259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.460 [2024-12-16 14:33:48.511277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157cc0, cid 4, qid 0 00:17:56.460 [2024-12-16 14:33:48.511336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.460 [2024-12-16 14:33:48.511343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.460 [2024-12-16 14:33:48.511346] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.511350] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x211ea00): datao=0, datal=4096, cccid=4 00:17:56.460 [2024-12-16 14:33:48.511355] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2157cc0) on tqpair(0x211ea00): expected_datao=0, payload_size=4096 00:17:56.460 [2024-12-16 14:33:48.511359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.511367] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.511371] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.511379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.460 [2024-12-16 14:33:48.511385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.460 [2024-12-16 14:33:48.511389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.511393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157cc0) on tqpair=0x211ea00 00:17:56.460 [2024-12-16 14:33:48.511406] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:17:56.460 [2024-12-16 14:33:48.511475] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.511486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x211ea00) 00:17:56.460 [2024-12-16 14:33:48.511495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.460 [2024-12-16 14:33:48.511504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.511508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.460 [2024-12-16 14:33:48.511512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x211ea00) 00:17:56.460 [2024-12-16 14:33:48.511518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.460 [2024-12-16 14:33:48.511550] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157cc0, cid 4, qid 0 00:17:56.460 [2024-12-16 14:33:48.511558] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157e40, cid 5, qid 0 00:17:56.461 [2024-12-16 14:33:48.511675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.461 [2024-12-16 14:33:48.511682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.461 [2024-12-16 14:33:48.511686] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.461 [2024-12-16 14:33:48.511690] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x211ea00): datao=0, datal=1024, cccid=4 00:17:56.461 [2024-12-16 14:33:48.511695] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2157cc0) on tqpair(0x211ea00): expected_datao=0, payload_size=1024 00:17:56.461 [2024-12-16 14:33:48.511700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.461 [2024-12-16 14:33:48.511707] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:56.461 [2024-12-16 14:33:48.511711] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:56.461 [2024-12-16 14:33:48.511717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.461 [2024-12-16 14:33:48.511723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.461 [2024-12-16 14:33:48.511727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.461 [2024-12-16 14:33:48.511731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157e40) on tqpair=0x211ea00 00:17:56.461 [2024-12-16 14:33:48.511750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.461 [2024-12-16 14:33:48.511758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.461 [2024-12-16 14:33:48.511761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.461 [2024-12-16 14:33:48.511766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157cc0) on tqpair=0x211ea00 00:17:56.461 [2024-12-16 14:33:48.511785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.461 [2024-12-16 14:33:48.511791] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x211ea00) 00:17:56.461 [2024-12-16 14:33:48.511799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.461 [2024-12-16 14:33:48.511824] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157cc0, cid 4, qid 0 00:17:56.461 [2024-12-16 14:33:48.511908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.461 [2024-12-16 14:33:48.511921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.461 [2024-12-16 14:33:48.511925] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.461 [2024-12-16 14:33:48.511929] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x211ea00): datao=0, datal=3072, cccid=4 00:17:56.461 [2024-12-16 14:33:48.511933] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2157cc0) on tqpair(0x211ea00): expected_datao=0, payload_size=3072 00:17:56.461 [2024-12-16 14:33:48.511938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.461 [2024-12-16 14:33:48.511945] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
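The GET LOG PAGE exchanges traced here fetch the discovery log whose decoded form follows: generation counter 2 and two records, the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1, both reachable on 10.0.0.3:4420. Since the harness loaded nvme-tcp earlier, the same records could also be pulled with the kernel initiator; a hedged illustration only, not something this test run executes:

    # requires nvme-cli and the nvme-tcp kernel module on the host
    nvme discover -t tcp -a 10.0.0.3 -s 4420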
00:17:56.461 [2024-12-16 14:33:48.511949] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:56.461 [2024-12-16 14:33:48.511958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.461 [2024-12-16 14:33:48.511964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.461 [2024-12-16 14:33:48.511967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.461 [2024-12-16 14:33:48.511971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157cc0) on tqpair=0x211ea00 00:17:56.461 [2024-12-16 14:33:48.511981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.461 [2024-12-16 14:33:48.511986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x211ea00) 00:17:56.461 [2024-12-16 14:33:48.511994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.461 [2024-12-16 14:33:48.512017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157cc0, cid 4, qid 0 00:17:56.461 [2024-12-16 14:33:48.512079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.461 [2024-12-16 14:33:48.512085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.461 [2024-12-16 14:33:48.512089] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.461 [2024-12-16 14:33:48.512093] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x211ea00): datao=0, datal=8, cccid=4 00:17:56.461 [2024-12-16 14:33:48.512098] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2157cc0) on tqpair(0x211ea00): expected_datao=0, payload_size=8 00:17:56.461 [2024-12-16 14:33:48.512102] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.461 [2024-12-16 14:33:48.512109] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:56.461 [2024-12-16 14:33:48.512113] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:56.461 ===================================================== 00:17:56.461 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:56.461 ===================================================== 00:17:56.461 Controller Capabilities/Features 00:17:56.461 ================================ 00:17:56.461 Vendor ID: 0000 00:17:56.461 Subsystem Vendor ID: 0000 00:17:56.461 Serial Number: .................... 00:17:56.461 Model Number: ........................................ 
00:17:56.461 Firmware Version: 25.01 00:17:56.461 Recommended Arb Burst: 0 00:17:56.461 IEEE OUI Identifier: 00 00 00 00:17:56.461 Multi-path I/O 00:17:56.461 May have multiple subsystem ports: No 00:17:56.461 May have multiple controllers: No 00:17:56.461 Associated with SR-IOV VF: No 00:17:56.461 Max Data Transfer Size: 131072 00:17:56.461 Max Number of Namespaces: 0 00:17:56.461 Max Number of I/O Queues: 1024 00:17:56.461 NVMe Specification Version (VS): 1.3 00:17:56.461 NVMe Specification Version (Identify): 1.3 00:17:56.461 Maximum Queue Entries: 128 00:17:56.461 Contiguous Queues Required: Yes 00:17:56.461 Arbitration Mechanisms Supported 00:17:56.461 Weighted Round Robin: Not Supported 00:17:56.461 Vendor Specific: Not Supported 00:17:56.461 Reset Timeout: 15000 ms 00:17:56.461 Doorbell Stride: 4 bytes 00:17:56.461 NVM Subsystem Reset: Not Supported 00:17:56.461 Command Sets Supported 00:17:56.461 NVM Command Set: Supported 00:17:56.461 Boot Partition: Not Supported 00:17:56.461 Memory Page Size Minimum: 4096 bytes 00:17:56.461 Memory Page Size Maximum: 4096 bytes 00:17:56.461 Persistent Memory Region: Not Supported 00:17:56.461 Optional Asynchronous Events Supported 00:17:56.461 Namespace Attribute Notices: Not Supported 00:17:56.461 Firmware Activation Notices: Not Supported 00:17:56.461 ANA Change Notices: Not Supported 00:17:56.461 PLE Aggregate Log Change Notices: Not Supported 00:17:56.461 LBA Status Info Alert Notices: Not Supported 00:17:56.461 EGE Aggregate Log Change Notices: Not Supported 00:17:56.461 Normal NVM Subsystem Shutdown event: Not Supported 00:17:56.461 Zone Descriptor Change Notices: Not Supported 00:17:56.461 Discovery Log Change Notices: Supported 00:17:56.461 Controller Attributes 00:17:56.461 128-bit Host Identifier: Not Supported 00:17:56.461 Non-Operational Permissive Mode: Not Supported 00:17:56.461 NVM Sets: Not Supported 00:17:56.461 Read Recovery Levels: Not Supported 00:17:56.461 Endurance Groups: Not Supported 00:17:56.461 Predictable Latency Mode: Not Supported 00:17:56.461 Traffic Based Keep ALive: Not Supported 00:17:56.461 Namespace Granularity: Not Supported 00:17:56.461 SQ Associations: Not Supported 00:17:56.461 UUID List: Not Supported 00:17:56.461 Multi-Domain Subsystem: Not Supported 00:17:56.461 Fixed Capacity Management: Not Supported 00:17:56.461 Variable Capacity Management: Not Supported 00:17:56.461 Delete Endurance Group: Not Supported 00:17:56.461 Delete NVM Set: Not Supported 00:17:56.461 Extended LBA Formats Supported: Not Supported 00:17:56.461 Flexible Data Placement Supported: Not Supported 00:17:56.461 00:17:56.461 Controller Memory Buffer Support 00:17:56.461 ================================ 00:17:56.461 Supported: No 00:17:56.461 00:17:56.461 Persistent Memory Region Support 00:17:56.461 ================================ 00:17:56.461 Supported: No 00:17:56.461 00:17:56.461 Admin Command Set Attributes 00:17:56.461 ============================ 00:17:56.461 Security Send/Receive: Not Supported 00:17:56.461 Format NVM: Not Supported 00:17:56.461 Firmware Activate/Download: Not Supported 00:17:56.461 Namespace Management: Not Supported 00:17:56.461 Device Self-Test: Not Supported 00:17:56.461 Directives: Not Supported 00:17:56.461 NVMe-MI: Not Supported 00:17:56.461 Virtualization Management: Not Supported 00:17:56.461 Doorbell Buffer Config: Not Supported 00:17:56.461 Get LBA Status Capability: Not Supported 00:17:56.461 Command & Feature Lockdown Capability: Not Supported 00:17:56.461 Abort Command Limit: 1 00:17:56.461 Async 
Event Request Limit: 4 00:17:56.461 Number of Firmware Slots: N/A 00:17:56.461 Firmware Slot 1 Read-Only: N/A 00:17:56.461 Firmware Activation Without Reset: N/A 00:17:56.461 Multiple Update Detection Support: N/A 00:17:56.461 Firmware Update Granularity: No Information Provided 00:17:56.461 Per-Namespace SMART Log: No 00:17:56.461 Asymmetric Namespace Access Log Page: Not Supported 00:17:56.461 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:56.461 Command Effects Log Page: Not Supported 00:17:56.461 Get Log Page Extended Data: Supported 00:17:56.461 Telemetry Log Pages: Not Supported 00:17:56.461 Persistent Event Log Pages: Not Supported 00:17:56.461 Supported Log Pages Log Page: May Support 00:17:56.461 Commands Supported & Effects Log Page: Not Supported 00:17:56.461 Feature Identifiers & Effects Log Page:May Support 00:17:56.461 NVMe-MI Commands & Effects Log Page: May Support 00:17:56.461 Data Area 4 for Telemetry Log: Not Supported 00:17:56.461 Error Log Page Entries Supported: 128 00:17:56.461 Keep Alive: Not Supported 00:17:56.461 00:17:56.461 NVM Command Set Attributes 00:17:56.461 ========================== 00:17:56.461 Submission Queue Entry Size 00:17:56.461 Max: 1 00:17:56.461 Min: 1 00:17:56.461 Completion Queue Entry Size 00:17:56.461 Max: 1 00:17:56.461 Min: 1 00:17:56.461 Number of Namespaces: 0 00:17:56.461 Compare Command: Not Supported 00:17:56.461 Write Uncorrectable Command: Not Supported 00:17:56.461 Dataset Management Command: Not Supported 00:17:56.462 Write Zeroes Command: Not Supported 00:17:56.462 Set Features Save Field: Not Supported 00:17:56.462 Reservations: Not Supported 00:17:56.462 Timestamp: Not Supported 00:17:56.462 Copy: Not Supported 00:17:56.462 Volatile Write Cache: Not Present 00:17:56.462 Atomic Write Unit (Normal): 1 00:17:56.462 Atomic Write Unit (PFail): 1 00:17:56.462 Atomic Compare & Write Unit: 1 00:17:56.462 Fused Compare & Write: Supported 00:17:56.462 Scatter-Gather List 00:17:56.462 SGL Command Set: Supported 00:17:56.462 SGL Keyed: Supported 00:17:56.462 SGL Bit Bucket Descriptor: Not Supported 00:17:56.462 SGL Metadata Pointer: Not Supported 00:17:56.462 Oversized SGL: Not Supported 00:17:56.462 SGL Metadata Address: Not Supported 00:17:56.462 SGL Offset: Supported 00:17:56.462 Transport SGL Data Block: Not Supported 00:17:56.462 Replay Protected Memory Block: Not Supported 00:17:56.462 00:17:56.462 Firmware Slot Information 00:17:56.462 ========================= 00:17:56.462 Active slot: 0 00:17:56.462 00:17:56.462 00:17:56.462 Error Log 00:17:56.462 ========= 00:17:56.462 00:17:56.462 Active Namespaces 00:17:56.462 ================= 00:17:56.462 Discovery Log Page 00:17:56.462 ================== 00:17:56.462 Generation Counter: 2 00:17:56.462 Number of Records: 2 00:17:56.462 Record Format: 0 00:17:56.462 00:17:56.462 Discovery Log Entry 0 00:17:56.462 ---------------------- 00:17:56.462 Transport Type: 3 (TCP) 00:17:56.462 Address Family: 1 (IPv4) 00:17:56.462 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:56.462 Entry Flags: 00:17:56.462 Duplicate Returned Information: 1 00:17:56.462 Explicit Persistent Connection Support for Discovery: 1 00:17:56.462 Transport Requirements: 00:17:56.462 Secure Channel: Not Required 00:17:56.462 Port ID: 0 (0x0000) 00:17:56.462 Controller ID: 65535 (0xffff) 00:17:56.462 Admin Max SQ Size: 128 00:17:56.462 Transport Service Identifier: 4420 00:17:56.462 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:56.462 Transport Address: 10.0.0.3 00:17:56.462 
Discovery Log Entry 1 00:17:56.462 ---------------------- 00:17:56.462 Transport Type: 3 (TCP) 00:17:56.462 Address Family: 1 (IPv4) 00:17:56.462 Subsystem Type: 2 (NVM Subsystem) 00:17:56.462 Entry Flags: 00:17:56.462 Duplicate Returned Information: 0 00:17:56.462 Explicit Persistent Connection Support for Discovery: 0 00:17:56.462 Transport Requirements: 00:17:56.462 Secure Channel: Not Required 00:17:56.462 Port ID: 0 (0x0000) 00:17:56.462 Controller ID: 65535 (0xffff) 00:17:56.462 Admin Max SQ Size: 128 00:17:56.462 Transport Service Identifier: 4420 00:17:56.462 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:56.462 Transport Address: 10.0.0.3 [2024-12-16 14:33:48.512128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.462 [2024-12-16 14:33:48.512135] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.462 [2024-12-16 14:33:48.512139] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157cc0) on tqpair=0x211ea00 00:17:56.462 [2024-12-16 14:33:48.512238] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:17:56.462 [2024-12-16 14:33:48.512253] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21576c0) on tqpair=0x211ea00 00:17:56.462 [2024-12-16 14:33:48.512261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.462 [2024-12-16 14:33:48.512267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157840) on tqpair=0x211ea00 00:17:56.462 [2024-12-16 14:33:48.512272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.462 [2024-12-16 14:33:48.512278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21579c0) on tqpair=0x211ea00 00:17:56.462 [2024-12-16 14:33:48.512282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.462 [2024-12-16 14:33:48.512288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157b40) on tqpair=0x211ea00 00:17:56.462 [2024-12-16 14:33:48.512292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.462 [2024-12-16 14:33:48.512305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512314] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x211ea00) 00:17:56.462 [2024-12-16 14:33:48.512321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.462 [2024-12-16 14:33:48.512344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157b40, cid 3, qid 0 00:17:56.462 [2024-12-16 14:33:48.512387] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.462 [2024-12-16 14:33:48.512394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.462 [2024-12-16 14:33:48.512398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512403] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157b40) on tqpair=0x211ea00 00:17:56.462 [2024-12-16 14:33:48.512411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x211ea00) 00:17:56.462 [2024-12-16 14:33:48.512426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.462 [2024-12-16 14:33:48.512462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157b40, cid 3, qid 0 00:17:56.462 [2024-12-16 14:33:48.512543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.462 [2024-12-16 14:33:48.512550] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.462 [2024-12-16 14:33:48.512554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157b40) on tqpair=0x211ea00 00:17:56.462 [2024-12-16 14:33:48.512563] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:17:56.462 [2024-12-16 14:33:48.512568] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:17:56.462 [2024-12-16 14:33:48.512579] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512604] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x211ea00) 00:17:56.462 [2024-12-16 14:33:48.512612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.462 [2024-12-16 14:33:48.512630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157b40, cid 3, qid 0 00:17:56.462 [2024-12-16 14:33:48.512673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.462 [2024-12-16 14:33:48.512680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.462 [2024-12-16 14:33:48.512684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157b40) on tqpair=0x211ea00 00:17:56.462 [2024-12-16 14:33:48.512700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x211ea00) 00:17:56.462 [2024-12-16 14:33:48.512717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.462 [2024-12-16 14:33:48.512735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157b40, cid 3, qid 0 00:17:56.462 [2024-12-16 14:33:48.512788] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.462 [2024-12-16 14:33:48.512795] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.462 [2024-12-16 
14:33:48.512799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157b40) on tqpair=0x211ea00 00:17:56.462 [2024-12-16 14:33:48.512829] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x211ea00) 00:17:56.462 [2024-12-16 14:33:48.512845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.462 [2024-12-16 14:33:48.512863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157b40, cid 3, qid 0 00:17:56.462 [2024-12-16 14:33:48.512915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.462 [2024-12-16 14:33:48.512922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.462 [2024-12-16 14:33:48.512926] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157b40) on tqpair=0x211ea00 00:17:56.462 [2024-12-16 14:33:48.512941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.512950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x211ea00) 00:17:56.462 [2024-12-16 14:33:48.512957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.462 [2024-12-16 14:33:48.512989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157b40, cid 3, qid 0 00:17:56.462 [2024-12-16 14:33:48.513029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.462 [2024-12-16 14:33:48.513036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.462 [2024-12-16 14:33:48.513040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.513044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157b40) on tqpair=0x211ea00 00:17:56.462 [2024-12-16 14:33:48.513054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.513059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.462 [2024-12-16 14:33:48.513063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x211ea00) 00:17:56.463 [2024-12-16 14:33:48.513070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.463 [2024-12-16 14:33:48.513087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157b40, cid 3, qid 0 00:17:56.463 [2024-12-16 14:33:48.513128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.463 [2024-12-16 14:33:48.513134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.463 [2024-12-16 14:33:48.513138] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.463 [2024-12-16 14:33:48.513142] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157b40) on 
tqpair=0x211ea00 00:17:56.463 [2024-12-16 14:33:48.513152] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.463 [2024-12-16 14:33:48.513157] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.463 [2024-12-16 14:33:48.513161] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x211ea00) 00:17:56.463 [2024-12-16 14:33:48.513168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.463 [2024-12-16 14:33:48.513185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157b40, cid 3, qid 0 00:17:56.463 [2024-12-16 14:33:48.513225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.463 [2024-12-16 14:33:48.513232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.463 [2024-12-16 14:33:48.513235] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.463 [2024-12-16 14:33:48.513239] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157b40) on tqpair=0x211ea00 00:17:56.463 [2024-12-16 14:33:48.513250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.463 [2024-12-16 14:33:48.513255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.463 [2024-12-16 14:33:48.513259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x211ea00) 00:17:56.463 [2024-12-16 14:33:48.513266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.463 [2024-12-16 14:33:48.513283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157b40, cid 3, qid 0 00:17:56.463 [2024-12-16 14:33:48.513323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.463 [2024-12-16 14:33:48.513329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.463 [2024-12-16 14:33:48.513333] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.463 [2024-12-16 14:33:48.513337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157b40) on tqpair=0x211ea00 00:17:56.463 [2024-12-16 14:33:48.513348] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.463 [2024-12-16 14:33:48.513352] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.463 [2024-12-16 14:33:48.513356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x211ea00) 00:17:56.463 [2024-12-16 14:33:48.513363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.463 [2024-12-16 14:33:48.513380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157b40, cid 3, qid 0 00:17:56.463 [2024-12-16 14:33:48.513426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.463 [2024-12-16 14:33:48.513433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.463 [2024-12-16 14:33:48.513436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.463 [2024-12-16 14:33:48.513441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157b40) on tqpair=0x211ea00 00:17:56.463 [2024-12-16 14:33:48.513451] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.463 [2024-12-16 14:33:48.513456] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.463 [2024-12-16 14:33:48.513459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x211ea00) 00:17:56.463 [2024-12-16 14:33:48.513467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.463 [2024-12-16 14:33:48.513483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157b40, cid 3, qid 0 00:17:56.463 [2024-12-16 14:33:48.517499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.463 [2024-12-16 14:33:48.517511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.463 [2024-12-16 14:33:48.517515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.463 [2024-12-16 14:33:48.517519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157b40) on tqpair=0x211ea00 00:17:56.463 [2024-12-16 14:33:48.517533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.463 [2024-12-16 14:33:48.517538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.463 [2024-12-16 14:33:48.517542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x211ea00) 00:17:56.463 [2024-12-16 14:33:48.517550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.463 [2024-12-16 14:33:48.517574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2157b40, cid 3, qid 0 00:17:56.463 [2024-12-16 14:33:48.517627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.463 [2024-12-16 14:33:48.517634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.463 [2024-12-16 14:33:48.517638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.463 [2024-12-16 14:33:48.517642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2157b40) on tqpair=0x211ea00 00:17:56.463 [2024-12-16 14:33:48.517650] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:17:56.463 00:17:56.463 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:56.463 [2024-12-16 14:33:48.557103] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
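For reference, the flow that the spdk_nvme_identify invocation above drives — connect to the target over TCP, walk the admin-queue state machine traced in the DEBUG lines (read vs, read cap, check en, identify controller, configure AER, ...), then read the controller data — can be reproduced with SPDK's public host API. The following is a minimal sketch only, assuming a standard SPDK build with error handling trimmed; it is not the test's own code.

    /* Minimal sketch: connect to the subsystem from the log and print
     * a few identify fields. Assumes SPDK headers/libs are available. */
    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include <stdio.h>

    int main(void)
    {
            struct spdk_env_opts env_opts;
            struct spdk_nvme_transport_id trid = {};
            struct spdk_nvme_ctrlr *ctrlr;
            const struct spdk_nvme_ctrlr_data *cdata;

            spdk_env_opts_init(&env_opts);
            env_opts.name = "identify_sketch";
            if (spdk_env_init(&env_opts) < 0) {
                    return 1;
            }

            /* Same target string as the log: TCP, 10.0.0.3:4420, cnode1. */
            if (spdk_nvme_transport_id_parse(&trid,
                    "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
                    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                    return 1;
            }

            /* spdk_nvme_connect() performs the admin-queue init sequence
             * seen in the nvme_ctrlr.c/nvme_tcp.c DEBUG traces above. */
            ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            if (ctrlr == NULL) {
                    return 1;
            }

            /* Corresponds to the "Model Number" / "Firmware Version"
             * fields printed in the identify dump below. */
            cdata = spdk_nvme_ctrlr_get_data(ctrlr);
            printf("Model: %.40s  FW: %.8s\n",
                   (const char *)cdata->mn, (const char *)cdata->fr);

            spdk_nvme_detach(ctrlr);
            return 0;
    }

The identify tool itself additionally enumerates active namespaces and per-namespace data, which is what produces the controller and namespace sections that follow.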
00:17:56.463 [2024-12-16 14:33:48.557151] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89952 ] 00:17:56.726 [2024-12-16 14:33:48.715176] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:17:56.726 [2024-12-16 14:33:48.715247] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:56.726 [2024-12-16 14:33:48.715254] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:56.726 [2024-12-16 14:33:48.715266] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:56.726 [2024-12-16 14:33:48.715275] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:56.726 [2024-12-16 14:33:48.719573] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:17:56.726 [2024-12-16 14:33:48.719640] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xdc9a00 0 00:17:56.726 [2024-12-16 14:33:48.719710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:56.726 [2024-12-16 14:33:48.719719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:56.726 [2024-12-16 14:33:48.719724] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:56.726 [2024-12-16 14:33:48.719727] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:56.726 [2024-12-16 14:33:48.719761] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.726 [2024-12-16 14:33:48.719768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.726 [2024-12-16 14:33:48.719772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc9a00) 00:17:56.726 [2024-12-16 14:33:48.719786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:56.726 [2024-12-16 14:33:48.719812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe026c0, cid 0, qid 0 00:17:56.726 [2024-12-16 14:33:48.727496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.726 [2024-12-16 14:33:48.727517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.726 [2024-12-16 14:33:48.727538] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.726 [2024-12-16 14:33:48.727543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe026c0) on tqpair=0xdc9a00 00:17:56.726 [2024-12-16 14:33:48.727554] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:56.726 [2024-12-16 14:33:48.727561] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:17:56.726 [2024-12-16 14:33:48.727568] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:17:56.726 [2024-12-16 14:33:48.727588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.726 [2024-12-16 14:33:48.727593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.726 [2024-12-16 14:33:48.727597] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc9a00) 00:17:56.726 [2024-12-16 14:33:48.727607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.726 [2024-12-16 14:33:48.727635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe026c0, cid 0, qid 0 00:17:56.726 [2024-12-16 14:33:48.727692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.726 [2024-12-16 14:33:48.727699] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.726 [2024-12-16 14:33:48.727702] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.726 [2024-12-16 14:33:48.727707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe026c0) on tqpair=0xdc9a00 00:17:56.727 [2024-12-16 14:33:48.727716] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:17:56.727 [2024-12-16 14:33:48.727725] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:17:56.727 [2024-12-16 14:33:48.727733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.727737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.727741] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc9a00) 00:17:56.727 [2024-12-16 14:33:48.727749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.727 [2024-12-16 14:33:48.727783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe026c0, cid 0, qid 0 00:17:56.727 [2024-12-16 14:33:48.727831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.727 [2024-12-16 14:33:48.727839] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.727 [2024-12-16 14:33:48.727843] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.727847] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe026c0) on tqpair=0xdc9a00 00:17:56.727 [2024-12-16 14:33:48.727853] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:17:56.727 [2024-12-16 14:33:48.727862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:56.727 [2024-12-16 14:33:48.727869] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.727874] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.727878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc9a00) 00:17:56.727 [2024-12-16 14:33:48.727885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.727 [2024-12-16 14:33:48.727903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe026c0, cid 0, qid 0 00:17:56.727 [2024-12-16 14:33:48.727948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.727 [2024-12-16 14:33:48.727955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.727 [2024-12-16 
14:33:48.727959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.727963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe026c0) on tqpair=0xdc9a00 00:17:56.727 [2024-12-16 14:33:48.727969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:56.727 [2024-12-16 14:33:48.727979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.727984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.727988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc9a00) 00:17:56.727 [2024-12-16 14:33:48.727995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.727 [2024-12-16 14:33:48.728012] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe026c0, cid 0, qid 0 00:17:56.727 [2024-12-16 14:33:48.728060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.727 [2024-12-16 14:33:48.728067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.727 [2024-12-16 14:33:48.728071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe026c0) on tqpair=0xdc9a00 00:17:56.727 [2024-12-16 14:33:48.728080] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:56.727 [2024-12-16 14:33:48.728085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:56.727 [2024-12-16 14:33:48.728094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:56.727 [2024-12-16 14:33:48.728204] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:17:56.727 [2024-12-16 14:33:48.728210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:56.727 [2024-12-16 14:33:48.728220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc9a00) 00:17:56.727 [2024-12-16 14:33:48.728236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.727 [2024-12-16 14:33:48.728254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe026c0, cid 0, qid 0 00:17:56.727 [2024-12-16 14:33:48.728306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.727 [2024-12-16 14:33:48.728313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.727 [2024-12-16 14:33:48.728317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe026c0) on tqpair=0xdc9a00 00:17:56.727 
[2024-12-16 14:33:48.728327] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:56.727 [2024-12-16 14:33:48.728337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc9a00) 00:17:56.727 [2024-12-16 14:33:48.728353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.727 [2024-12-16 14:33:48.728369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe026c0, cid 0, qid 0 00:17:56.727 [2024-12-16 14:33:48.728417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.727 [2024-12-16 14:33:48.728424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.727 [2024-12-16 14:33:48.728428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe026c0) on tqpair=0xdc9a00 00:17:56.727 [2024-12-16 14:33:48.728437] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:56.727 [2024-12-16 14:33:48.728443] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:56.727 [2024-12-16 14:33:48.728451] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:17:56.727 [2024-12-16 14:33:48.728473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:56.727 [2024-12-16 14:33:48.728486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc9a00) 00:17:56.727 [2024-12-16 14:33:48.728499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.727 [2024-12-16 14:33:48.728520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe026c0, cid 0, qid 0 00:17:56.727 [2024-12-16 14:33:48.728613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.727 [2024-12-16 14:33:48.728620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.727 [2024-12-16 14:33:48.728624] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728628] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdc9a00): datao=0, datal=4096, cccid=0 00:17:56.727 [2024-12-16 14:33:48.728634] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe026c0) on tqpair(0xdc9a00): expected_datao=0, payload_size=4096 00:17:56.727 [2024-12-16 14:33:48.728638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728647] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728651] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.727 [2024-12-16 14:33:48.728666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.727 [2024-12-16 14:33:48.728670] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728674] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe026c0) on tqpair=0xdc9a00 00:17:56.727 [2024-12-16 14:33:48.728683] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:17:56.727 [2024-12-16 14:33:48.728688] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:17:56.727 [2024-12-16 14:33:48.728693] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:17:56.727 [2024-12-16 14:33:48.728698] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:17:56.727 [2024-12-16 14:33:48.728703] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:17:56.727 [2024-12-16 14:33:48.728708] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:17:56.727 [2024-12-16 14:33:48.728717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:56.727 [2024-12-16 14:33:48.728725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc9a00) 00:17:56.727 [2024-12-16 14:33:48.728742] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:56.727 [2024-12-16 14:33:48.728761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe026c0, cid 0, qid 0 00:17:56.727 [2024-12-16 14:33:48.728809] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.727 [2024-12-16 14:33:48.728816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.727 [2024-12-16 14:33:48.728820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe026c0) on tqpair=0xdc9a00 00:17:56.727 [2024-12-16 14:33:48.728832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdc9a00) 00:17:56.727 [2024-12-16 14:33:48.728847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.727 [2024-12-16 14:33:48.728854] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=1 on tqpair(0xdc9a00) 00:17:56.727 [2024-12-16 14:33:48.728868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.727 [2024-12-16 14:33:48.728875] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.727 [2024-12-16 14:33:48.728879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.728882] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xdc9a00) 00:17:56.728 [2024-12-16 14:33:48.728888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.728 [2024-12-16 14:33:48.728895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.728899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.728902] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.728 [2024-12-16 14:33:48.728908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.728 [2024-12-16 14:33:48.728914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:56.728 [2024-12-16 14:33:48.728927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:56.728 [2024-12-16 14:33:48.728935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.728939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdc9a00) 00:17:56.728 [2024-12-16 14:33:48.728946] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.728 [2024-12-16 14:33:48.728966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe026c0, cid 0, qid 0 00:17:56.728 [2024-12-16 14:33:48.728973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02840, cid 1, qid 0 00:17:56.728 [2024-12-16 14:33:48.728978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe029c0, cid 2, qid 0 00:17:56.728 [2024-12-16 14:33:48.728983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.728 [2024-12-16 14:33:48.728988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02cc0, cid 4, qid 0 00:17:56.728 [2024-12-16 14:33:48.729077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.728 [2024-12-16 14:33:48.729084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.728 [2024-12-16 14:33:48.729088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02cc0) on tqpair=0xdc9a00 00:17:56.728 [2024-12-16 14:33:48.729098] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:17:56.728 [2024-12-16 14:33:48.729104] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific 
(timeout 30000 ms) 00:17:56.728 [2024-12-16 14:33:48.729116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:17:56.728 [2024-12-16 14:33:48.729123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:56.728 [2024-12-16 14:33:48.729131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdc9a00) 00:17:56.728 [2024-12-16 14:33:48.729146] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:56.728 [2024-12-16 14:33:48.729165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02cc0, cid 4, qid 0 00:17:56.728 [2024-12-16 14:33:48.729216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.728 [2024-12-16 14:33:48.729223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.728 [2024-12-16 14:33:48.729227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02cc0) on tqpair=0xdc9a00 00:17:56.728 [2024-12-16 14:33:48.729293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:17:56.728 [2024-12-16 14:33:48.729304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:56.728 [2024-12-16 14:33:48.729312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdc9a00) 00:17:56.728 [2024-12-16 14:33:48.729324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.728 [2024-12-16 14:33:48.729342] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02cc0, cid 4, qid 0 00:17:56.728 [2024-12-16 14:33:48.729406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.728 [2024-12-16 14:33:48.729418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.728 [2024-12-16 14:33:48.729423] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729427] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdc9a00): datao=0, datal=4096, cccid=4 00:17:56.728 [2024-12-16 14:33:48.729459] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe02cc0) on tqpair(0xdc9a00): expected_datao=0, payload_size=4096 00:17:56.728 [2024-12-16 14:33:48.729465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729473] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729477] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.728 [2024-12-16 
14:33:48.729493] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.728 [2024-12-16 14:33:48.729497] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729502] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02cc0) on tqpair=0xdc9a00 00:17:56.728 [2024-12-16 14:33:48.729523] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:17:56.728 [2024-12-16 14:33:48.729534] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:17:56.728 [2024-12-16 14:33:48.729546] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:17:56.728 [2024-12-16 14:33:48.729554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdc9a00) 00:17:56.728 [2024-12-16 14:33:48.729567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.728 [2024-12-16 14:33:48.729589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02cc0, cid 4, qid 0 00:17:56.728 [2024-12-16 14:33:48.729710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.728 [2024-12-16 14:33:48.729717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.728 [2024-12-16 14:33:48.729721] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729725] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdc9a00): datao=0, datal=4096, cccid=4 00:17:56.728 [2024-12-16 14:33:48.729730] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe02cc0) on tqpair(0xdc9a00): expected_datao=0, payload_size=4096 00:17:56.728 [2024-12-16 14:33:48.729735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729743] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729747] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.728 [2024-12-16 14:33:48.729763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.728 [2024-12-16 14:33:48.729766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02cc0) on tqpair=0xdc9a00 00:17:56.728 [2024-12-16 14:33:48.729787] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:56.728 [2024-12-16 14:33:48.729798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:56.728 [2024-12-16 14:33:48.729806] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdc9a00) 00:17:56.728 [2024-12-16 14:33:48.729818] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.728 [2024-12-16 14:33:48.729853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02cc0, cid 4, qid 0 00:17:56.728 [2024-12-16 14:33:48.729915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.728 [2024-12-16 14:33:48.729922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.728 [2024-12-16 14:33:48.729926] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729930] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdc9a00): datao=0, datal=4096, cccid=4 00:17:56.728 [2024-12-16 14:33:48.729935] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe02cc0) on tqpair(0xdc9a00): expected_datao=0, payload_size=4096 00:17:56.728 [2024-12-16 14:33:48.729940] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729947] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729951] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.728 [2024-12-16 14:33:48.729966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.728 [2024-12-16 14:33:48.729969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.728 [2024-12-16 14:33:48.729974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02cc0) on tqpair=0xdc9a00 00:17:56.728 [2024-12-16 14:33:48.729983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:56.728 [2024-12-16 14:33:48.729992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:17:56.729 [2024-12-16 14:33:48.730004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:17:56.729 [2024-12-16 14:33:48.730014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:56.729 [2024-12-16 14:33:48.730020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:56.729 [2024-12-16 14:33:48.730025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:17:56.729 [2024-12-16 14:33:48.730031] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:17:56.729 [2024-12-16 14:33:48.730036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:17:56.729 [2024-12-16 14:33:48.730042] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:17:56.729 [2024-12-16 14:33:48.730057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0xdc9a00) 00:17:56.729 [2024-12-16 14:33:48.730070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.729 [2024-12-16 14:33:48.730077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdc9a00) 00:17:56.729 [2024-12-16 14:33:48.730092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.729 [2024-12-16 14:33:48.730115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02cc0, cid 4, qid 0 00:17:56.729 [2024-12-16 14:33:48.730123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02e40, cid 5, qid 0 00:17:56.729 [2024-12-16 14:33:48.730181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.729 [2024-12-16 14:33:48.730188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.729 [2024-12-16 14:33:48.730192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02cc0) on tqpair=0xdc9a00 00:17:56.729 [2024-12-16 14:33:48.730204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.729 [2024-12-16 14:33:48.730210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.729 [2024-12-16 14:33:48.730213] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02e40) on tqpair=0xdc9a00 00:17:56.729 [2024-12-16 14:33:48.730228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdc9a00) 00:17:56.729 [2024-12-16 14:33:48.730240] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.729 [2024-12-16 14:33:48.730256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02e40, cid 5, qid 0 00:17:56.729 [2024-12-16 14:33:48.730307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.729 [2024-12-16 14:33:48.730313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.729 [2024-12-16 14:33:48.730317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02e40) on tqpair=0xdc9a00 00:17:56.729 [2024-12-16 14:33:48.730332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdc9a00) 00:17:56.729 [2024-12-16 14:33:48.730343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.729 [2024-12-16 14:33:48.730359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02e40, cid 5, qid 0 00:17:56.729 [2024-12-16 14:33:48.730405] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.729 [2024-12-16 14:33:48.730412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.729 [2024-12-16 14:33:48.730416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02e40) on tqpair=0xdc9a00 00:17:56.729 [2024-12-16 14:33:48.730431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdc9a00) 00:17:56.729 [2024-12-16 14:33:48.730442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.729 [2024-12-16 14:33:48.730470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02e40, cid 5, qid 0 00:17:56.729 [2024-12-16 14:33:48.730507] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.729 [2024-12-16 14:33:48.730514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.729 [2024-12-16 14:33:48.730518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02e40) on tqpair=0xdc9a00 00:17:56.729 [2024-12-16 14:33:48.730541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdc9a00) 00:17:56.729 [2024-12-16 14:33:48.730554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.729 [2024-12-16 14:33:48.730562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730566] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdc9a00) 00:17:56.729 [2024-12-16 14:33:48.730573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.729 [2024-12-16 14:33:48.730580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xdc9a00) 00:17:56.729 [2024-12-16 14:33:48.730590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.729 [2024-12-16 14:33:48.730598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730602] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xdc9a00) 00:17:56.729 [2024-12-16 14:33:48.730609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.729 [2024-12-16 14:33:48.730629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02e40, cid 5, qid 0 00:17:56.729 [2024-12-16 14:33:48.730635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02cc0, cid 4, qid 0 00:17:56.729 
[2024-12-16 14:33:48.730640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02fc0, cid 6, qid 0 00:17:56.729 [2024-12-16 14:33:48.730645] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe03140, cid 7, qid 0 00:17:56.729 [2024-12-16 14:33:48.730805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.729 [2024-12-16 14:33:48.730814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.729 [2024-12-16 14:33:48.730818] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730822] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdc9a00): datao=0, datal=8192, cccid=5 00:17:56.729 [2024-12-16 14:33:48.730827] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe02e40) on tqpair(0xdc9a00): expected_datao=0, payload_size=8192 00:17:56.729 [2024-12-16 14:33:48.730832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730849] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730854] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.729 [2024-12-16 14:33:48.730867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.729 [2024-12-16 14:33:48.730870] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730874] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdc9a00): datao=0, datal=512, cccid=4 00:17:56.729 [2024-12-16 14:33:48.730879] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe02cc0) on tqpair(0xdc9a00): expected_datao=0, payload_size=512 00:17:56.729 [2024-12-16 14:33:48.730884] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730891] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730895] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.729 [2024-12-16 14:33:48.730907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.729 [2024-12-16 14:33:48.730910] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730914] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdc9a00): datao=0, datal=512, cccid=6 00:17:56.729 [2024-12-16 14:33:48.730919] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe02fc0) on tqpair(0xdc9a00): expected_datao=0, payload_size=512 00:17:56.729 [2024-12-16 14:33:48.730924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730930] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730934] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.729 [2024-12-16 14:33:48.730946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.729 [2024-12-16 14:33:48.730950] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730954] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0xdc9a00): datao=0, datal=4096, cccid=7 00:17:56.729 [2024-12-16 14:33:48.730959] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe03140) on tqpair(0xdc9a00): expected_datao=0, payload_size=4096 00:17:56.729 [2024-12-16 14:33:48.730964] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730971] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730975] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.729 [2024-12-16 14:33:48.730989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.729 [2024-12-16 14:33:48.730993] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.730997] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02e40) on tqpair=0xdc9a00 00:17:56.729 [2024-12-16 14:33:48.731013] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.729 [2024-12-16 14:33:48.731020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.729 [2024-12-16 14:33:48.731023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.729 [2024-12-16 14:33:48.731028] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02cc0) on tqpair=0xdc9a00 00:17:56.729 [2024-12-16 14:33:48.731054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.729 [2024-12-16 14:33:48.731060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.730 [2024-12-16 14:33:48.731064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.730 [2024-12-16 14:33:48.731068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02fc0) on tqpair=0xdc9a00 00:17:56.730 [2024-12-16 14:33:48.731076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.730 [2024-12-16 14:33:48.731082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.730 [2024-12-16 14:33:48.731085] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.730 ===================================================== 00:17:56.730 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:56.730 ===================================================== 00:17:56.730 Controller Capabilities/Features 00:17:56.730 ================================ 00:17:56.730 Vendor ID: 8086 00:17:56.730 Subsystem Vendor ID: 8086 00:17:56.730 Serial Number: SPDK00000000000001 00:17:56.730 Model Number: SPDK bdev Controller 00:17:56.730 Firmware Version: 25.01 00:17:56.730 Recommended Arb Burst: 6 00:17:56.730 IEEE OUI Identifier: e4 d2 5c 00:17:56.730 Multi-path I/O 00:17:56.730 May have multiple subsystem ports: Yes 00:17:56.730 May have multiple controllers: Yes 00:17:56.730 Associated with SR-IOV VF: No 00:17:56.730 Max Data Transfer Size: 131072 00:17:56.730 Max Number of Namespaces: 32 00:17:56.730 Max Number of I/O Queues: 127 00:17:56.730 NVMe Specification Version (VS): 1.3 00:17:56.730 NVMe Specification Version (Identify): 1.3 00:17:56.730 Maximum Queue Entries: 128 00:17:56.730 Contiguous Queues Required: Yes 00:17:56.730 Arbitration Mechanisms Supported 00:17:56.730 Weighted Round Robin: Not Supported 00:17:56.730 Vendor Specific: Not Supported 00:17:56.730 Reset Timeout: 15000 ms 00:17:56.730 Doorbell Stride: 4 bytes 
00:17:56.730 NVM Subsystem Reset: Not Supported 00:17:56.730 Command Sets Supported 00:17:56.730 NVM Command Set: Supported 00:17:56.730 Boot Partition: Not Supported 00:17:56.730 Memory Page Size Minimum: 4096 bytes 00:17:56.730 Memory Page Size Maximum: 4096 bytes 00:17:56.730 Persistent Memory Region: Not Supported 00:17:56.730 Optional Asynchronous Events Supported 00:17:56.730 Namespace Attribute Notices: Supported 00:17:56.730 Firmware Activation Notices: Not Supported 00:17:56.730 ANA Change Notices: Not Supported 00:17:56.730 PLE Aggregate Log Change Notices: Not Supported 00:17:56.730 LBA Status Info Alert Notices: Not Supported 00:17:56.730 EGE Aggregate Log Change Notices: Not Supported 00:17:56.730 Normal NVM Subsystem Shutdown event: Not Supported 00:17:56.730 Zone Descriptor Change Notices: Not Supported 00:17:56.730 Discovery Log Change Notices: Not Supported 00:17:56.730 Controller Attributes 00:17:56.730 128-bit Host Identifier: Supported 00:17:56.730 Non-Operational Permissive Mode: Not Supported 00:17:56.730 NVM Sets: Not Supported 00:17:56.730 Read Recovery Levels: Not Supported 00:17:56.730 Endurance Groups: Not Supported 00:17:56.730 Predictable Latency Mode: Not Supported 00:17:56.730 Traffic Based Keep ALive: Not Supported 00:17:56.730 Namespace Granularity: Not Supported 00:17:56.730 SQ Associations: Not Supported 00:17:56.730 UUID List: Not Supported 00:17:56.730 Multi-Domain Subsystem: Not Supported 00:17:56.730 Fixed Capacity Management: Not Supported 00:17:56.730 Variable Capacity Management: Not Supported 00:17:56.730 Delete Endurance Group: Not Supported 00:17:56.730 Delete NVM Set: Not Supported 00:17:56.730 Extended LBA Formats Supported: Not Supported 00:17:56.730 Flexible Data Placement Supported: Not Supported 00:17:56.730 00:17:56.730 Controller Memory Buffer Support 00:17:56.730 ================================ 00:17:56.730 Supported: No 00:17:56.730 00:17:56.730 Persistent Memory Region Support 00:17:56.730 ================================ 00:17:56.730 Supported: No 00:17:56.730 00:17:56.730 Admin Command Set Attributes 00:17:56.730 ============================ 00:17:56.730 Security Send/Receive: Not Supported 00:17:56.730 Format NVM: Not Supported 00:17:56.730 Firmware Activate/Download: Not Supported 00:17:56.730 Namespace Management: Not Supported 00:17:56.730 Device Self-Test: Not Supported 00:17:56.730 Directives: Not Supported 00:17:56.730 NVMe-MI: Not Supported 00:17:56.730 Virtualization Management: Not Supported 00:17:56.730 Doorbell Buffer Config: Not Supported 00:17:56.730 Get LBA Status Capability: Not Supported 00:17:56.730 Command & Feature Lockdown Capability: Not Supported 00:17:56.730 Abort Command Limit: 4 00:17:56.730 Async Event Request Limit: 4 00:17:56.730 Number of Firmware Slots: N/A 00:17:56.730 Firmware Slot 1 Read-Only: N/A 00:17:56.730 Firmware Activation Without Reset: N/A 00:17:56.730 Multiple Update Detection Support: N/A 00:17:56.730 Firmware Update Granularity: No Information Provided 00:17:56.730 Per-Namespace SMART Log: No 00:17:56.730 Asymmetric Namespace Access Log Page: Not Supported 00:17:56.730 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:56.730 Command Effects Log Page: Supported 00:17:56.730 Get Log Page Extended Data: Supported 00:17:56.730 Telemetry Log Pages: Not Supported 00:17:56.730 Persistent Event Log Pages: Not Supported 00:17:56.730 Supported Log Pages Log Page: May Support 00:17:56.730 Commands Supported & Effects Log Page: Not Supported 00:17:56.730 Feature Identifiers & Effects Log Page:May 
Support 00:17:56.730 NVMe-MI Commands & Effects Log Page: May Support 00:17:56.730 Data Area 4 for Telemetry Log: Not Supported 00:17:56.730 Error Log Page Entries Supported: 128 00:17:56.730 Keep Alive: Supported 00:17:56.730 Keep Alive Granularity: 10000 ms 00:17:56.730 00:17:56.730 NVM Command Set Attributes 00:17:56.730 ========================== 00:17:56.730 Submission Queue Entry Size 00:17:56.730 Max: 64 00:17:56.730 Min: 64 00:17:56.730 Completion Queue Entry Size 00:17:56.730 Max: 16 00:17:56.730 Min: 16 00:17:56.730 Number of Namespaces: 32 00:17:56.730 Compare Command: Supported 00:17:56.730 Write Uncorrectable Command: Not Supported 00:17:56.730 Dataset Management Command: Supported 00:17:56.730 Write Zeroes Command: Supported 00:17:56.730 Set Features Save Field: Not Supported 00:17:56.730 Reservations: Supported 00:17:56.730 Timestamp: Not Supported 00:17:56.730 Copy: Supported 00:17:56.730 Volatile Write Cache: Present 00:17:56.730 Atomic Write Unit (Normal): 1 00:17:56.730 Atomic Write Unit (PFail): 1 00:17:56.730 Atomic Compare & Write Unit: 1 00:17:56.730 Fused Compare & Write: Supported 00:17:56.730 Scatter-Gather List 00:17:56.730 SGL Command Set: Supported 00:17:56.730 SGL Keyed: Supported 00:17:56.730 SGL Bit Bucket Descriptor: Not Supported 00:17:56.730 SGL Metadata Pointer: Not Supported 00:17:56.730 Oversized SGL: Not Supported 00:17:56.730 SGL Metadata Address: Not Supported 00:17:56.730 SGL Offset: Supported 00:17:56.730 Transport SGL Data Block: Not Supported 00:17:56.730 Replay Protected Memory Block: Not Supported 00:17:56.730 00:17:56.730 Firmware Slot Information 00:17:56.730 ========================= 00:17:56.730 Active slot: 1 00:17:56.730 Slot 1 Firmware Revision: 25.01 00:17:56.730 00:17:56.730 00:17:56.730 Commands Supported and Effects 00:17:56.730 ============================== 00:17:56.730 Admin Commands 00:17:56.730 -------------- 00:17:56.730 Get Log Page (02h): Supported 00:17:56.730 Identify (06h): Supported 00:17:56.730 Abort (08h): Supported 00:17:56.730 Set Features (09h): Supported 00:17:56.730 Get Features (0Ah): Supported 00:17:56.730 Asynchronous Event Request (0Ch): Supported 00:17:56.730 Keep Alive (18h): Supported 00:17:56.730 I/O Commands 00:17:56.730 ------------ 00:17:56.730 Flush (00h): Supported LBA-Change 00:17:56.730 Write (01h): Supported LBA-Change 00:17:56.730 Read (02h): Supported 00:17:56.730 Compare (05h): Supported 00:17:56.730 Write Zeroes (08h): Supported LBA-Change 00:17:56.730 Dataset Management (09h): Supported LBA-Change 00:17:56.730 Copy (19h): Supported LBA-Change 00:17:56.730 00:17:56.730 Error Log 00:17:56.730 ========= 00:17:56.730 00:17:56.730 Arbitration 00:17:56.730 =========== 00:17:56.730 Arbitration Burst: 1 00:17:56.730 00:17:56.730 Power Management 00:17:56.730 ================ 00:17:56.730 Number of Power States: 1 00:17:56.730 Current Power State: Power State #0 00:17:56.730 Power State #0: 00:17:56.730 Max Power: 0.00 W 00:17:56.730 Non-Operational State: Operational 00:17:56.730 Entry Latency: Not Reported 00:17:56.730 Exit Latency: Not Reported 00:17:56.730 Relative Read Throughput: 0 00:17:56.730 Relative Read Latency: 0 00:17:56.730 Relative Write Throughput: 0 00:17:56.730 Relative Write Latency: 0 00:17:56.730 Idle Power: Not Reported 00:17:56.730 Active Power: Not Reported 00:17:56.730 Non-Operational Permissive Mode: Not Supported 00:17:56.730 00:17:56.730 Health Information 00:17:56.730 ================== 00:17:56.730 Critical Warnings: 00:17:56.730 Available Spare Space: OK 00:17:56.730 
Temperature: OK 00:17:56.730 Device Reliability: OK 00:17:56.730 Read Only: No 00:17:56.730 Volatile Memory Backup: OK 00:17:56.730 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:56.730 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:56.730 Available Spare: 0% 00:17:56.730 Available Spare Threshold: 0% 00:17:56.731 Life Percentage Used:[2024-12-16 14:33:48.731089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe03140) on tqpair=0xdc9a00 00:17:56.731 [2024-12-16 14:33:48.731207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.731214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xdc9a00) 00:17:56.731 [2024-12-16 14:33:48.731222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.731 [2024-12-16 14:33:48.731244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe03140, cid 7, qid 0 00:17:56.731 [2024-12-16 14:33:48.731295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.731 [2024-12-16 14:33:48.731302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.731 [2024-12-16 14:33:48.731306] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.731310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe03140) on tqpair=0xdc9a00 00:17:56.731 [2024-12-16 14:33:48.731352] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:17:56.731 [2024-12-16 14:33:48.731363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe026c0) on tqpair=0xdc9a00 00:17:56.731 [2024-12-16 14:33:48.731370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.731 [2024-12-16 14:33:48.731375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02840) on tqpair=0xdc9a00 00:17:56.731 [2024-12-16 14:33:48.731380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.731 [2024-12-16 14:33:48.731385] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe029c0) on tqpair=0xdc9a00 00:17:56.731 [2024-12-16 14:33:48.731390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.731 [2024-12-16 14:33:48.731395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.731 [2024-12-16 14:33:48.731399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.731 [2024-12-16 14:33:48.731408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.731412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.731416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.731 [2024-12-16 14:33:48.731424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.731 [2024-12-16 14:33:48.731444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.731 [2024-12-16 
14:33:48.735539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.731 [2024-12-16 14:33:48.735561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.731 [2024-12-16 14:33:48.735566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.735571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.731 [2024-12-16 14:33:48.735581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.735586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.735590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.731 [2024-12-16 14:33:48.735598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.731 [2024-12-16 14:33:48.735629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.731 [2024-12-16 14:33:48.735704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.731 [2024-12-16 14:33:48.735711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.731 [2024-12-16 14:33:48.735715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.735719] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.731 [2024-12-16 14:33:48.735724] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:17:56.731 [2024-12-16 14:33:48.735729] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:17:56.731 [2024-12-16 14:33:48.735740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.735745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.735749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.731 [2024-12-16 14:33:48.735756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.731 [2024-12-16 14:33:48.735774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.731 [2024-12-16 14:33:48.735822] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.731 [2024-12-16 14:33:48.735828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.731 [2024-12-16 14:33:48.735832] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.735836] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.731 [2024-12-16 14:33:48.735847] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.735852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.735856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.731 [2024-12-16 14:33:48.735878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.731 [2024-12-16 14:33:48.735894] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.731 [2024-12-16 14:33:48.735943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.731 [2024-12-16 14:33:48.735950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.731 [2024-12-16 14:33:48.735953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.735957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.731 [2024-12-16 14:33:48.735968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.735972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.735981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.731 [2024-12-16 14:33:48.735989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.731 [2024-12-16 14:33:48.736004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.731 [2024-12-16 14:33:48.736050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.731 [2024-12-16 14:33:48.736057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.731 [2024-12-16 14:33:48.736060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.736064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.731 [2024-12-16 14:33:48.736074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.736079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.736083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.731 [2024-12-16 14:33:48.736090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.731 [2024-12-16 14:33:48.736105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.731 [2024-12-16 14:33:48.736151] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.731 [2024-12-16 14:33:48.736158] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.731 [2024-12-16 14:33:48.736162] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.736165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.731 [2024-12-16 14:33:48.736176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.736180] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.736184] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.731 [2024-12-16 14:33:48.736191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.731 [2024-12-16 14:33:48.736207] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.731 [2024-12-16 14:33:48.736253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.731 [2024-12-16 
14:33:48.736259] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.731 [2024-12-16 14:33:48.736263] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.736267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.731 [2024-12-16 14:33:48.736277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.736282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.736285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.731 [2024-12-16 14:33:48.736292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.731 [2024-12-16 14:33:48.736308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.731 [2024-12-16 14:33:48.736355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.731 [2024-12-16 14:33:48.736362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.731 [2024-12-16 14:33:48.736365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.736369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.731 [2024-12-16 14:33:48.736379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.736384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.736388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.731 [2024-12-16 14:33:48.736395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.731 [2024-12-16 14:33:48.736411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.731 [2024-12-16 14:33:48.736465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.731 [2024-12-16 14:33:48.736474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.731 [2024-12-16 14:33:48.736478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.736482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.731 [2024-12-16 14:33:48.736493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.736498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.731 [2024-12-16 14:33:48.736501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.731 [2024-12-16 14:33:48.736509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.731 [2024-12-16 14:33:48.736528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.731 [2024-12-16 14:33:48.736574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.731 [2024-12-16 14:33:48.736581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.732 [2024-12-16 14:33:48.736585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.732 [2024-12-16 
14:33:48.736589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.732 [2024-12-16 14:33:48.736599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.736604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.736607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.732 [2024-12-16 14:33:48.736614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.732 [2024-12-16 14:33:48.736630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.732 [2024-12-16 14:33:48.736672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.732 [2024-12-16 14:33:48.736679] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.732 [2024-12-16 14:33:48.736682] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.736686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.732 [2024-12-16 14:33:48.736697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.736701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.736705] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.732 [2024-12-16 14:33:48.736712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.732 [2024-12-16 14:33:48.736728] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.732 [2024-12-16 14:33:48.736768] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.732 [2024-12-16 14:33:48.736775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.732 [2024-12-16 14:33:48.736779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.736783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.732 [2024-12-16 14:33:48.736793] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.736798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.736801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.732 [2024-12-16 14:33:48.736808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.732 [2024-12-16 14:33:48.736825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.732 [2024-12-16 14:33:48.736865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.732 [2024-12-16 14:33:48.736872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.732 [2024-12-16 14:33:48.736876] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.736880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.732 [2024-12-16 14:33:48.736890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:17:56.732 [2024-12-16 14:33:48.736895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.736899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.732 [2024-12-16 14:33:48.736906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.732 [2024-12-16 14:33:48.736921] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.732 [2024-12-16 14:33:48.736965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.732 [2024-12-16 14:33:48.736971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.732 [2024-12-16 14:33:48.736975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.736979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.732 [2024-12-16 14:33:48.736989] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.736994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.736998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.732 [2024-12-16 14:33:48.737005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.732 [2024-12-16 14:33:48.737021] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.732 [2024-12-16 14:33:48.737064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.732 [2024-12-16 14:33:48.737070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.732 [2024-12-16 14:33:48.737074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.732 [2024-12-16 14:33:48.737088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.732 [2024-12-16 14:33:48.737104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.732 [2024-12-16 14:33:48.737120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.732 [2024-12-16 14:33:48.737164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.732 [2024-12-16 14:33:48.737170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.732 [2024-12-16 14:33:48.737174] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737178] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.732 [2024-12-16 14:33:48.737188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737196] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0xdc9a00) 00:17:56.732 [2024-12-16 14:33:48.737203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.732 [2024-12-16 14:33:48.737219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.732 [2024-12-16 14:33:48.737265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.732 [2024-12-16 14:33:48.737282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.732 [2024-12-16 14:33:48.737286] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737290] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.732 [2024-12-16 14:33:48.737300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737308] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.732 [2024-12-16 14:33:48.737316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.732 [2024-12-16 14:33:48.737332] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.732 [2024-12-16 14:33:48.737378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.732 [2024-12-16 14:33:48.737385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.732 [2024-12-16 14:33:48.737389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.732 [2024-12-16 14:33:48.737403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.732 [2024-12-16 14:33:48.737418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.732 [2024-12-16 14:33:48.737446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.732 [2024-12-16 14:33:48.737495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.732 [2024-12-16 14:33:48.737502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.732 [2024-12-16 14:33:48.737505] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.732 [2024-12-16 14:33:48.737520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.732 [2024-12-16 14:33:48.737536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.732 
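(The long run of near-identical NOTICE lines above and below — "FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0" — is the host-side shutdown poll: after nvme_ctrlr_destruct_async starts controller teardown (the trace earlier reports "RTD3E = 0 us" and "shutdown timeout = 10000 ms"), the driver issues a Fabrics Property Get capsule on the admin qpair (qid 0, cid 3, tqpair 0xdc9a00) on each poll iteration to read the controller status until shutdown is reported complete, which the log confirms a few lines further down with "shutdown complete in 7 milliseconds". A minimal sketch for summarizing this poll loop from a saved copy of the console output — the file name nvmf_identify.log is a hypothetical placeholder, not something the test produces:

    # Count how many Fabrics Property Get polls the host issued on the admin
    # queue (qid:0, cid:3) while waiting for the controller to finish shutdown,
    # then show how long the shutdown poll took according to the driver.
    grep -c 'FABRIC PROPERTY GET qid:0 cid:3' nvmf_identify.log
    grep -o 'shutdown complete in [0-9]* milliseconds' nvmf_identify.log
)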
[2024-12-16 14:33:48.737553] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.732 [2024-12-16 14:33:48.737600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.732 [2024-12-16 14:33:48.737606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.732 [2024-12-16 14:33:48.737610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.732 [2024-12-16 14:33:48.737624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.732 [2024-12-16 14:33:48.737632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.733 [2024-12-16 14:33:48.737639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.733 [2024-12-16 14:33:48.737655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.733 [2024-12-16 14:33:48.737696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.733 [2024-12-16 14:33:48.737703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.733 [2024-12-16 14:33:48.737706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.737710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.733 [2024-12-16 14:33:48.737720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.737725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.737728] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.733 [2024-12-16 14:33:48.737735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.733 [2024-12-16 14:33:48.737751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.733 [2024-12-16 14:33:48.737797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.733 [2024-12-16 14:33:48.737804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.733 [2024-12-16 14:33:48.737808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.737812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.733 [2024-12-16 14:33:48.737822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.737826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.737830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.733 [2024-12-16 14:33:48.737837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.733 [2024-12-16 14:33:48.737853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.733 [2024-12-16 14:33:48.737899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:17:56.733 [2024-12-16 14:33:48.737905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.733 [2024-12-16 14:33:48.737909] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.737913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.733 [2024-12-16 14:33:48.737923] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.737928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.737931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.733 [2024-12-16 14:33:48.737938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.733 [2024-12-16 14:33:48.737954] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.733 [2024-12-16 14:33:48.738001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.733 [2024-12-16 14:33:48.738007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.733 [2024-12-16 14:33:48.738011] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738015] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.733 [2024-12-16 14:33:48.738025] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738029] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.733 [2024-12-16 14:33:48.738040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.733 [2024-12-16 14:33:48.738056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.733 [2024-12-16 14:33:48.738104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.733 [2024-12-16 14:33:48.738110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.733 [2024-12-16 14:33:48.738114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.733 [2024-12-16 14:33:48.738128] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738133] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.733 [2024-12-16 14:33:48.738144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.733 [2024-12-16 14:33:48.738159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.733 [2024-12-16 14:33:48.738205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.733 [2024-12-16 14:33:48.738212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.733 [2024-12-16 14:33:48.738216] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:17:56.733 [2024-12-16 14:33:48.738220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.733 [2024-12-16 14:33:48.738230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738234] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738238] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.733 [2024-12-16 14:33:48.738245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.733 [2024-12-16 14:33:48.738260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.733 [2024-12-16 14:33:48.738312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.733 [2024-12-16 14:33:48.738318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.733 [2024-12-16 14:33:48.738322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.733 [2024-12-16 14:33:48.738336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.733 [2024-12-16 14:33:48.738351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.733 [2024-12-16 14:33:48.738367] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.733 [2024-12-16 14:33:48.738412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.733 [2024-12-16 14:33:48.738419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.733 [2024-12-16 14:33:48.738423] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738427] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.733 [2024-12-16 14:33:48.738448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738454] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.733 [2024-12-16 14:33:48.738465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.733 [2024-12-16 14:33:48.738483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.733 [2024-12-16 14:33:48.738530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.733 [2024-12-16 14:33:48.738537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.733 [2024-12-16 14:33:48.738541] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.733 [2024-12-16 14:33:48.738555] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.733 [2024-12-16 14:33:48.738571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.733 [2024-12-16 14:33:48.738587] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.733 [2024-12-16 14:33:48.738633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.733 [2024-12-16 14:33:48.738640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.733 [2024-12-16 14:33:48.738643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.733 [2024-12-16 14:33:48.738658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738666] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.733 [2024-12-16 14:33:48.738673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.733 [2024-12-16 14:33:48.738689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.733 [2024-12-16 14:33:48.738776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.733 [2024-12-16 14:33:48.738784] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.733 [2024-12-16 14:33:48.738788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.733 [2024-12-16 14:33:48.738803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.733 [2024-12-16 14:33:48.738820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.733 [2024-12-16 14:33:48.738838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.733 [2024-12-16 14:33:48.738898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.733 [2024-12-16 14:33:48.738913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.733 [2024-12-16 14:33:48.738917] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.733 [2024-12-16 14:33:48.738932] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.733 [2024-12-16 14:33:48.738941] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.733 [2024-12-16 14:33:48.738948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.733 [2024-12-16 14:33:48.738965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.733 [2024-12-16 14:33:48.739011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.734 [2024-12-16 14:33:48.739018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.734 [2024-12-16 14:33:48.739022] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.734 [2024-12-16 14:33:48.739026] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.734 [2024-12-16 14:33:48.739037] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.734 [2024-12-16 14:33:48.739042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.734 [2024-12-16 14:33:48.739046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.734 [2024-12-16 14:33:48.739068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.734 [2024-12-16 14:33:48.739085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.734 [2024-12-16 14:33:48.739159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.734 [2024-12-16 14:33:48.739166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.734 [2024-12-16 14:33:48.739170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.734 [2024-12-16 14:33:48.739174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.734 [2024-12-16 14:33:48.739184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.734 [2024-12-16 14:33:48.739189] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.734 [2024-12-16 14:33:48.739193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.734 [2024-12-16 14:33:48.739200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.734 [2024-12-16 14:33:48.739217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.734 [2024-12-16 14:33:48.739264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.734 [2024-12-16 14:33:48.739270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.734 [2024-12-16 14:33:48.739274] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.734 [2024-12-16 14:33:48.739278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.734 [2024-12-16 14:33:48.739289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.734 [2024-12-16 14:33:48.739294] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.734 [2024-12-16 14:33:48.739297] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.734 [2024-12-16 14:33:48.739305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.734 [2024-12-16 14:33:48.739321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.734 [2024-12-16 14:33:48.739382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.734 [2024-12-16 14:33:48.739389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.734 [2024-12-16 14:33:48.739393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.734 [2024-12-16 14:33:48.739397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.734 [2024-12-16 14:33:48.739408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.734 [2024-12-16 14:33:48.739412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.734 [2024-12-16 14:33:48.739416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.734 [2024-12-16 14:33:48.739424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.734 [2024-12-16 14:33:48.739441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.734 [2024-12-16 14:33:48.739493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.734 [2024-12-16 14:33:48.739504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.734 [2024-12-16 14:33:48.739508] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.734 [2024-12-16 14:33:48.739512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.734 [2024-12-16 14:33:48.743503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.734 [2024-12-16 14:33:48.743524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.734 [2024-12-16 14:33:48.743529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdc9a00) 00:17:56.734 [2024-12-16 14:33:48.743538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.734 [2024-12-16 14:33:48.743564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe02b40, cid 3, qid 0 00:17:56.734 [2024-12-16 14:33:48.743636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.734 [2024-12-16 14:33:48.743643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.734 [2024-12-16 14:33:48.743648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.734 [2024-12-16 14:33:48.743652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe02b40) on tqpair=0xdc9a00 00:17:56.734 [2024-12-16 14:33:48.743662] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:17:56.734 0% 00:17:56.734 Data Units Read: 0 00:17:56.734 Data Units Written: 0 00:17:56.734 Host Read Commands: 0 00:17:56.734 Host Write Commands: 0 00:17:56.734 Controller Busy Time: 0 minutes 00:17:56.734 Power Cycles: 0 00:17:56.734 Power On Hours: 0 hours 00:17:56.734 Unsafe Shutdowns: 0 00:17:56.734 Unrecoverable Media Errors: 0 00:17:56.734 Lifetime Error Log Entries: 0 00:17:56.734 Warning Temperature Time: 0 minutes 00:17:56.734 Critical Temperature Time: 0 minutes 00:17:56.734 00:17:56.734 Number of Queues 00:17:56.734 
================ 00:17:56.734 Number of I/O Submission Queues: 127 00:17:56.734 Number of I/O Completion Queues: 127 00:17:56.734 00:17:56.734 Active Namespaces 00:17:56.734 ================= 00:17:56.734 Namespace ID:1 00:17:56.734 Error Recovery Timeout: Unlimited 00:17:56.734 Command Set Identifier: NVM (00h) 00:17:56.734 Deallocate: Supported 00:17:56.734 Deallocated/Unwritten Error: Not Supported 00:17:56.734 Deallocated Read Value: Unknown 00:17:56.734 Deallocate in Write Zeroes: Not Supported 00:17:56.734 Deallocated Guard Field: 0xFFFF 00:17:56.734 Flush: Supported 00:17:56.734 Reservation: Supported 00:17:56.734 Namespace Sharing Capabilities: Multiple Controllers 00:17:56.734 Size (in LBAs): 131072 (0GiB) 00:17:56.734 Capacity (in LBAs): 131072 (0GiB) 00:17:56.734 Utilization (in LBAs): 131072 (0GiB) 00:17:56.734 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:56.734 EUI64: ABCDEF0123456789 00:17:56.734 UUID: 594918ee-eca7-4ee9-9f53-53cbad85f318 00:17:56.734 Thin Provisioning: Not Supported 00:17:56.734 Per-NS Atomic Units: Yes 00:17:56.734 Atomic Boundary Size (Normal): 0 00:17:56.734 Atomic Boundary Size (PFail): 0 00:17:56.734 Atomic Boundary Offset: 0 00:17:56.734 Maximum Single Source Range Length: 65535 00:17:56.734 Maximum Copy Length: 65535 00:17:56.734 Maximum Source Range Count: 1 00:17:56.734 NGUID/EUI64 Never Reused: No 00:17:56.734 Namespace Write Protected: No 00:17:56.734 Number of LBA Formats: 1 00:17:56.734 Current LBA Format: LBA Format #00 00:17:56.734 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:56.734 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:56.734 rmmod nvme_tcp 00:17:56.734 rmmod nvme_fabrics 00:17:56.734 rmmod nvme_keyring 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 89922 ']' 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 89922 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@954 -- # '[' -z 89922 ']' 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 89922 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.734 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89922 00:17:56.994 killing process with pid 89922 00:17:56.994 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:56.994 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:56.994 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89922' 00:17:56.994 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 89922 00:17:56.994 14:33:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 89922 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:56.994 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if2 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:17:57.253 00:17:57.253 real 0m2.136s 00:17:57.253 user 0m4.256s 00:17:57.253 sys 0m0.695s 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:57.253 ************************************ 00:17:57.253 END TEST nvmf_identify 00:17:57.253 ************************************ 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.253 ************************************ 00:17:57.253 START TEST nvmf_perf 00:17:57.253 ************************************ 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:57.253 * Looking for test storage... 00:17:57.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:17:57.253 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:57.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.513 --rc genhtml_branch_coverage=1 00:17:57.513 --rc genhtml_function_coverage=1 00:17:57.513 --rc genhtml_legend=1 00:17:57.513 --rc geninfo_all_blocks=1 00:17:57.513 --rc geninfo_unexecuted_blocks=1 00:17:57.513 00:17:57.513 ' 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:57.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.513 --rc genhtml_branch_coverage=1 00:17:57.513 --rc genhtml_function_coverage=1 00:17:57.513 --rc genhtml_legend=1 00:17:57.513 --rc geninfo_all_blocks=1 00:17:57.513 --rc geninfo_unexecuted_blocks=1 00:17:57.513 00:17:57.513 ' 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:57.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.513 --rc genhtml_branch_coverage=1 00:17:57.513 --rc genhtml_function_coverage=1 00:17:57.513 --rc genhtml_legend=1 00:17:57.513 --rc geninfo_all_blocks=1 00:17:57.513 --rc geninfo_unexecuted_blocks=1 00:17:57.513 00:17:57.513 ' 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:57.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.513 --rc genhtml_branch_coverage=1 00:17:57.513 --rc genhtml_function_coverage=1 00:17:57.513 --rc genhtml_legend=1 00:17:57.513 --rc geninfo_all_blocks=1 00:17:57.513 --rc geninfo_unexecuted_blocks=1 00:17:57.513 00:17:57.513 ' 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.513 14:33:49 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.513 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.514 14:33:49 
nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:57.514 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.514 14:33:49 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:57.514 Cannot find device "nvmf_init_br" 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:57.514 Cannot find device "nvmf_init_br2" 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:57.514 Cannot find device "nvmf_tgt_br" 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.514 Cannot find device "nvmf_tgt_br2" 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- 
# true 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:57.514 Cannot find device "nvmf_init_br" 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:57.514 Cannot find device "nvmf_init_br2" 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:57.514 Cannot find device "nvmf_tgt_br" 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:57.514 Cannot find device "nvmf_tgt_br2" 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:57.514 Cannot find device "nvmf_br" 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:57.514 Cannot find device "nvmf_init_if" 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:57.514 Cannot find device "nvmf_init_if2" 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:57.514 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:57.773 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:57.773 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:57.773 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:17:57.773 00:17:57.773 --- 10.0.0.3 ping statistics --- 00:17:57.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.774 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:57.774 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:57.774 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:17:57.774 00:17:57.774 --- 10.0.0.4 ping statistics --- 00:17:57.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.774 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:57.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:57.774 00:17:57.774 --- 10.0.0.1 ping statistics --- 00:17:57.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.774 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:57.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:17:57.774 00:17:57.774 --- 10.0.0.2 ping statistics --- 00:17:57.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.774 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=90175 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 90175 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 90175 ']' 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:57.774 14:33:49 
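Annotation (not part of the captured trace): at this point nvmf_veth_init has finished building the virtual test network and nvmf_tgt is being launched inside the target namespace. A condensed, hand-runnable sketch of that topology, assembled only from the ip/iptables commands visible in the trace above (interface names and the 10.0.0.0/24 addressing come from the log; the authoritative flags live in test/nvmf/common.sh, and the SPDK_NVMF comment tags used for later cleanup are omitted here), looks like:

  ip netns add nvmf_tgt_ns_spdk
  # initiator-side veth pairs stay in the root namespace
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  # target-side veth pairs; the *_if ends move into the namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: initiators 10.0.0.1/.2 in the root ns, target 10.0.0.3/.4 in the ns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring everything up and tie the *_br ends together on one bridge
  for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br  master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br
  # admit NVMe/TCP traffic on port 4420 and allow bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3   # root ns -> target ns, the same check the log performs

The nvmf_tgt process then runs inside nvmf_tgt_ns_spdk and listens on 10.0.0.3:4420, while spdk_nvme_perf connects to it from the root namespace, which is exactly the split the four ping checks above verify.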
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.774 14:33:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:58.032 [2024-12-16 14:33:50.001140] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:58.032 [2024-12-16 14:33:50.001238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.032 [2024-12-16 14:33:50.149062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:58.032 [2024-12-16 14:33:50.171147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.032 [2024-12-16 14:33:50.171342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.032 [2024-12-16 14:33:50.171548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.032 [2024-12-16 14:33:50.171780] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.032 [2024-12-16 14:33:50.171852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.032 [2024-12-16 14:33:50.172768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.032 [2024-12-16 14:33:50.172945] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.032 [2024-12-16 14:33:50.173551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:58.032 [2024-12-16 14:33:50.173556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.032 [2024-12-16 14:33:50.202818] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:58.966 14:33:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.966 14:33:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:17:58.966 14:33:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:58.966 14:33:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:58.966 14:33:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:58.966 14:33:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.966 14:33:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:58.966 14:33:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:59.533 14:33:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:59.533 14:33:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:59.792 14:33:51 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:59.792 14:33:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.050 14:33:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:00.050 14:33:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:18:00.050 14:33:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:00.050 14:33:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:18:00.050 14:33:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:00.308 [2024-12-16 14:33:52.333075] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.308 14:33:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:00.566 14:33:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:00.566 14:33:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:00.824 14:33:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:00.824 14:33:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:01.083 14:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:01.341 [2024-12-16 14:33:53.374543] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:01.341 14:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:01.600 14:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:01.600 14:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:01.600 14:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:01.600 14:33:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:02.535 Initializing NVMe Controllers 00:18:02.535 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:02.535 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:02.535 Initialization complete. Launching workers. 
00:18:02.535 ======================================================== 00:18:02.535 Latency(us) 00:18:02.535 Device Information : IOPS MiB/s Average min max 00:18:02.535 PCIE (0000:00:10.0) NSID 1 from core 0: 22432.00 87.62 1425.93 389.16 8107.26 00:18:02.535 ======================================================== 00:18:02.535 Total : 22432.00 87.62 1425.93 389.16 8107.26 00:18:02.535 00:18:02.792 14:33:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:04.172 Initializing NVMe Controllers 00:18:04.172 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:04.172 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:04.172 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:04.172 Initialization complete. Launching workers. 00:18:04.172 ======================================================== 00:18:04.172 Latency(us) 00:18:04.172 Device Information : IOPS MiB/s Average min max 00:18:04.172 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3827.06 14.95 260.98 96.43 7195.49 00:18:04.172 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.61 0.49 8023.80 4982.33 11989.13 00:18:04.172 ======================================================== 00:18:04.172 Total : 3952.67 15.44 507.67 96.43 11989.13 00:18:04.172 00:18:04.172 14:33:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:05.565 Initializing NVMe Controllers 00:18:05.565 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:05.565 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:05.565 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:05.565 Initialization complete. Launching workers. 00:18:05.565 ======================================================== 00:18:05.565 Latency(us) 00:18:05.565 Device Information : IOPS MiB/s Average min max 00:18:05.565 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8762.50 34.23 3652.04 681.70 9529.65 00:18:05.565 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3964.01 15.48 8073.91 5463.20 16175.45 00:18:05.565 ======================================================== 00:18:05.565 Total : 12726.52 49.71 5029.35 681.70 16175.45 00:18:05.565 00:18:05.565 14:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:18:05.565 14:33:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:08.098 Initializing NVMe Controllers 00:18:08.098 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:08.098 Controller IO queue size 128, less than required. 00:18:08.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.098 Controller IO queue size 128, less than required. 
00:18:08.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.098 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:08.098 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:08.098 Initialization complete. Launching workers. 00:18:08.098 ======================================================== 00:18:08.098 Latency(us) 00:18:08.098 Device Information : IOPS MiB/s Average min max 00:18:08.098 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1766.92 441.73 73253.08 42891.91 121650.44 00:18:08.098 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 634.97 158.74 207870.54 59801.46 351373.49 00:18:08.098 ======================================================== 00:18:08.098 Total : 2401.89 600.47 108840.96 42891.91 351373.49 00:18:08.098 00:18:08.098 14:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:18:08.357 Initializing NVMe Controllers 00:18:08.357 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:08.357 Controller IO queue size 128, less than required. 00:18:08.357 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.357 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:08.357 Controller IO queue size 128, less than required. 00:18:08.357 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.357 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:08.357 WARNING: Some requested NVMe devices were skipped 00:18:08.357 No valid NVMe controllers or AIO or URING devices found 00:18:08.357 14:34:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:18:10.890 Initializing NVMe Controllers 00:18:10.890 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:10.890 Controller IO queue size 128, less than required. 00:18:10.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:10.890 Controller IO queue size 128, less than required. 00:18:10.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:10.890 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:10.890 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:10.890 Initialization complete. Launching workers. 
00:18:10.890 00:18:10.890 ==================== 00:18:10.890 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:10.890 TCP transport: 00:18:10.890 polls: 9056 00:18:10.890 idle_polls: 4007 00:18:10.890 sock_completions: 5049 00:18:10.890 nvme_completions: 7363 00:18:10.890 submitted_requests: 11094 00:18:10.890 queued_requests: 1 00:18:10.890 00:18:10.890 ==================== 00:18:10.890 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:10.890 TCP transport: 00:18:10.890 polls: 10153 00:18:10.890 idle_polls: 5975 00:18:10.890 sock_completions: 4178 00:18:10.890 nvme_completions: 6807 00:18:10.890 submitted_requests: 10262 00:18:10.890 queued_requests: 1 00:18:10.890 ======================================================== 00:18:10.890 Latency(us) 00:18:10.890 Device Information : IOPS MiB/s Average min max 00:18:10.890 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1840.50 460.12 70804.14 39854.25 102381.57 00:18:10.890 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1701.50 425.37 75957.43 25854.17 117657.25 00:18:10.890 ======================================================== 00:18:10.890 Total : 3541.99 885.50 73279.66 25854.17 117657.25 00:18:10.890 00:18:10.890 14:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:10.890 14:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:11.148 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:18:11.148 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:18:11.148 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:18:11.407 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=5c9ac341-f305-49c7-a4a1-2801fedb5b6b 00:18:11.407 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 5c9ac341-f305-49c7-a4a1-2801fedb5b6b 00:18:11.407 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=5c9ac341-f305-49c7-a4a1-2801fedb5b6b 00:18:11.407 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:18:11.407 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:18:11.407 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:18:11.407 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:11.666 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:18:11.666 { 00:18:11.666 "uuid": "5c9ac341-f305-49c7-a4a1-2801fedb5b6b", 00:18:11.666 "name": "lvs_0", 00:18:11.666 "base_bdev": "Nvme0n1", 00:18:11.666 "total_data_clusters": 1278, 00:18:11.666 "free_clusters": 1278, 00:18:11.666 "block_size": 4096, 00:18:11.666 "cluster_size": 4194304 00:18:11.666 } 00:18:11.666 ]' 00:18:11.666 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="5c9ac341-f305-49c7-a4a1-2801fedb5b6b") .free_clusters' 00:18:11.666 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1278 00:18:11.666 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="5c9ac341-f305-49c7-a4a1-2801fedb5b6b") .cluster_size' 00:18:11.666 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:18:11.666 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5112 00:18:11.666 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5112 00:18:11.666 5112 00:18:11.666 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:18:11.666 14:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5c9ac341-f305-49c7-a4a1-2801fedb5b6b lbd_0 5112 00:18:11.925 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=8f83d87a-53c2-4794-92ef-8bfa55e3f665 00:18:11.925 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 8f83d87a-53c2-4794-92ef-8bfa55e3f665 lvs_n_0 00:18:12.184 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=333041c0-4865-4bb6-b2f8-829d791f8e9e 00:18:12.184 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 333041c0-4865-4bb6-b2f8-829d791f8e9e 00:18:12.184 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=333041c0-4865-4bb6-b2f8-829d791f8e9e 00:18:12.184 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:18:12.184 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:18:12.184 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:18:12.184 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:12.443 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:18:12.443 { 00:18:12.443 "uuid": "5c9ac341-f305-49c7-a4a1-2801fedb5b6b", 00:18:12.443 "name": "lvs_0", 00:18:12.443 "base_bdev": "Nvme0n1", 00:18:12.443 "total_data_clusters": 1278, 00:18:12.443 "free_clusters": 0, 00:18:12.443 "block_size": 4096, 00:18:12.443 "cluster_size": 4194304 00:18:12.443 }, 00:18:12.443 { 00:18:12.443 "uuid": "333041c0-4865-4bb6-b2f8-829d791f8e9e", 00:18:12.443 "name": "lvs_n_0", 00:18:12.443 "base_bdev": "8f83d87a-53c2-4794-92ef-8bfa55e3f665", 00:18:12.443 "total_data_clusters": 1276, 00:18:12.443 "free_clusters": 1276, 00:18:12.443 "block_size": 4096, 00:18:12.443 "cluster_size": 4194304 00:18:12.443 } 00:18:12.443 ]' 00:18:12.443 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="333041c0-4865-4bb6-b2f8-829d791f8e9e") .free_clusters' 00:18:12.702 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1276 00:18:12.702 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="333041c0-4865-4bb6-b2f8-829d791f8e9e") .cluster_size' 00:18:12.702 5104 00:18:12.702 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:18:12.702 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5104 00:18:12.702 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5104 00:18:12.702 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:18:12.702 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 333041c0-4865-4bb6-b2f8-829d791f8e9e lbd_nest_0 5104 00:18:12.961 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=08c4489d-66fe-486f-9ddb-47c4baa229ed 00:18:12.961 14:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:13.220 14:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:18:13.220 14:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 08c4489d-66fe-486f-9ddb-47c4baa229ed 00:18:13.220 14:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:13.479 14:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:18:13.479 14:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:18:13.479 14:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:13.479 14:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:13.479 14:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:14.046 Initializing NVMe Controllers 00:18:14.046 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:14.046 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:14.046 WARNING: Some requested NVMe devices were skipped 00:18:14.046 No valid NVMe controllers or AIO or URING devices found 00:18:14.046 14:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:14.046 14:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:24.024 Initializing NVMe Controllers 00:18:24.024 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:24.024 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:24.024 Initialization complete. Launching workers. 
00:18:24.024 ======================================================== 00:18:24.024 Latency(us) 00:18:24.024 Device Information : IOPS MiB/s Average min max 00:18:24.024 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 972.60 121.57 1027.29 330.02 8013.18 00:18:24.024 ======================================================== 00:18:24.024 Total : 972.60 121.57 1027.29 330.02 8013.18 00:18:24.024 00:18:24.283 14:34:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:24.283 14:34:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:24.283 14:34:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:24.541 Initializing NVMe Controllers 00:18:24.541 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:24.541 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:24.541 WARNING: Some requested NVMe devices were skipped 00:18:24.541 No valid NVMe controllers or AIO or URING devices found 00:18:24.541 14:34:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:24.541 14:34:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:36.750 Initializing NVMe Controllers 00:18:36.750 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:36.750 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:36.750 Initialization complete. Launching workers. 
00:18:36.750 ======================================================== 00:18:36.750 Latency(us) 00:18:36.750 Device Information : IOPS MiB/s Average min max 00:18:36.750 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1373.70 171.71 23332.49 6387.95 63863.67 00:18:36.750 ======================================================== 00:18:36.750 Total : 1373.70 171.71 23332.49 6387.95 63863.67 00:18:36.750 00:18:36.750 14:34:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:36.750 14:34:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:36.750 14:34:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:36.750 Initializing NVMe Controllers 00:18:36.750 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:36.750 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:36.750 WARNING: Some requested NVMe devices were skipped 00:18:36.750 No valid NVMe controllers or AIO or URING devices found 00:18:36.750 14:34:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:36.750 14:34:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:46.727 Initializing NVMe Controllers 00:18:46.727 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:46.727 Controller IO queue size 128, less than required. 00:18:46.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:46.727 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:46.727 Initialization complete. Launching workers. 
00:18:46.727 ======================================================== 00:18:46.727 Latency(us) 00:18:46.727 Device Information : IOPS MiB/s Average min max 00:18:46.727 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4209.83 526.23 30425.07 7056.89 57943.69 00:18:46.727 ======================================================== 00:18:46.727 Total : 4209.83 526.23 30425.07 7056.89 57943.69 00:18:46.727 00:18:46.727 14:34:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:46.727 14:34:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 08c4489d-66fe-486f-9ddb-47c4baa229ed 00:18:46.727 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:46.727 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8f83d87a-53c2-4794-92ef-8bfa55e3f665 00:18:46.727 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:46.727 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:46.727 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:46.727 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:46.727 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:18:46.727 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:46.727 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:18:46.727 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:46.727 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:46.986 rmmod nvme_tcp 00:18:46.986 rmmod nvme_fabrics 00:18:46.986 rmmod nvme_keyring 00:18:46.986 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:46.986 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:18:46.986 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:18:46.986 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 90175 ']' 00:18:46.986 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 90175 00:18:46.986 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 90175 ']' 00:18:46.986 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 90175 00:18:46.986 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:18:46.986 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.986 14:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90175 00:18:46.986 killing process with pid 90175 00:18:46.986 14:34:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.986 14:34:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.986 14:34:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90175' 00:18:46.986 14:34:39 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 90175 00:18:46.986 14:34:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 90175 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.362 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.363 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:18:48.363 ************************************ 00:18:48.363 END TEST nvmf_perf 00:18:48.363 ************************************ 00:18:48.363 00:18:48.363 real 0m51.162s 00:18:48.363 user 3m12.417s 00:18:48.363 sys 0m12.885s 00:18:48.363 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.363 14:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:48.363 14:34:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:48.363 14:34:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:48.363 14:34:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.363 14:34:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.622 ************************************ 00:18:48.622 START TEST nvmf_fio_host 00:18:48.622 ************************************ 00:18:48.622 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:48.622 * Looking for test storage... 00:18:48.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:48.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.623 --rc genhtml_branch_coverage=1 00:18:48.623 --rc genhtml_function_coverage=1 00:18:48.623 --rc genhtml_legend=1 00:18:48.623 --rc geninfo_all_blocks=1 00:18:48.623 --rc geninfo_unexecuted_blocks=1 00:18:48.623 00:18:48.623 ' 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:48.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.623 --rc genhtml_branch_coverage=1 00:18:48.623 --rc genhtml_function_coverage=1 00:18:48.623 --rc genhtml_legend=1 00:18:48.623 --rc geninfo_all_blocks=1 00:18:48.623 --rc geninfo_unexecuted_blocks=1 00:18:48.623 00:18:48.623 ' 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:48.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.623 --rc genhtml_branch_coverage=1 00:18:48.623 --rc genhtml_function_coverage=1 00:18:48.623 --rc genhtml_legend=1 00:18:48.623 --rc geninfo_all_blocks=1 00:18:48.623 --rc geninfo_unexecuted_blocks=1 00:18:48.623 00:18:48.623 ' 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:48.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.623 --rc genhtml_branch_coverage=1 00:18:48.623 --rc genhtml_function_coverage=1 00:18:48.623 --rc genhtml_legend=1 00:18:48.623 --rc geninfo_all_blocks=1 00:18:48.623 --rc geninfo_unexecuted_blocks=1 00:18:48.623 00:18:48.623 ' 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.623 14:34:40 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.623 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.624 14:34:40 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:48.624 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
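The environment block above comes from test/nvmf/common.sh: it pins the TCP listener ports (4420-4422), generates a per-run host NQN with nvme gen-hostnqn, and selects the virtual (veth) network type before nvmftestinit runs. A minimal sketch of how those host-identity values are derived and might be consumed on the initiator side, assuming the same variable names as the log; the connect command is illustrative only and does not appear in this part of the run:

  # Sketch only: host identity as exported by common.sh in this log.
  NVMF_PORT=4420
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # the UUID portion becomes the host ID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

  # Hypothetical initiator-side connect using those values (not taken from this run):
  # nvme connect -t tcp -a 10.0.0.3 -s "$NVMF_PORT" \
  #     -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"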
00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:48.624 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:48.883 Cannot find device "nvmf_init_br" 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:48.883 Cannot find device "nvmf_init_br2" 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:48.883 Cannot find device "nvmf_tgt_br" 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:48.883 Cannot find device "nvmf_tgt_br2" 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:48.883 Cannot find device "nvmf_init_br" 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:48.883 Cannot find device "nvmf_init_br2" 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:48.883 Cannot find device "nvmf_tgt_br" 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:48.883 Cannot find device "nvmf_tgt_br2" 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:48.883 Cannot find device "nvmf_br" 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:48.883 Cannot find device "nvmf_init_if" 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:48.883 Cannot find device "nvmf_init_if2" 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:48.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:48.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:48.883 14:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:48.883 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:48.883 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:48.883 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:18:48.883 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:48.883 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:48.883 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:49.143 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:49.143 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:18:49.143 00:18:49.143 --- 10.0.0.3 ping statistics --- 00:18:49.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.143 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:49.143 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:49.143 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:18:49.143 00:18:49.143 --- 10.0.0.4 ping statistics --- 00:18:49.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.143 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:49.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:49.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:49.143 00:18:49.143 --- 10.0.0.1 ping statistics --- 00:18:49.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.143 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:49.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:18:49.143 00:18:49.143 --- 10.0.0.2 ping statistics --- 00:18:49.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.143 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=91040 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:49.143 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 91040 00:18:49.144 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 91040 ']' 00:18:49.144 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.144 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.144 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.144 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.144 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.144 [2024-12-16 14:34:41.305611] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:49.144 [2024-12-16 14:34:41.305707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.402 [2024-12-16 14:34:41.453563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.402 [2024-12-16 14:34:41.473068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.402 [2024-12-16 14:34:41.473250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.402 [2024-12-16 14:34:41.473411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.402 [2024-12-16 14:34:41.473621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.402 [2024-12-16 14:34:41.473726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
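The nvmftestinit/nvmf_veth_init sequence above builds the virtual test network before the target starts: the initiator interfaces (10.0.0.1, 10.0.0.2) stay in the root namespace, their veth peers and the two target-side bridge ports hang off nvmf_br, the target interfaces (10.0.0.3, 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, iptables ACCEPT rules tagged SPDK_NVMF open port 4420, and single pings confirm reachability; nvmf_tgt is then launched inside the namespace and waitforlisten waits for its RPC socket. A condensed sketch of that topology, reduced to one initiator/target leg and using the same names and addresses as the log (this is not the suite's common.sh itself):

  # Sketch: one leg of the veth topology that nvmf_veth_init creates in this run.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Allow NVMe/TCP traffic to the default port and confirm reachability.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3

  # The target is then started inside the namespace, as in the log:
  # ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF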
00:18:49.402 [2024-12-16 14:34:41.474521] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.402 [2024-12-16 14:34:41.474649] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.402 [2024-12-16 14:34:41.474688] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.402 [2024-12-16 14:34:41.474690] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.402 [2024-12-16 14:34:41.502607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:49.403 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.403 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:18:49.403 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:49.661 [2024-12-16 14:34:41.859716] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.919 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:49.919 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.919 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.919 14:34:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:50.178 Malloc1 00:18:50.178 14:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:50.435 14:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:50.694 14:34:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:50.952 [2024-12-16 14:34:42.994341] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:50.952 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:51.211 14:34:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:51.469 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:51.469 fio-3.35 00:18:51.469 Starting 1 thread 00:18:54.000 00:18:54.000 test: (groupid=0, jobs=1): err= 0: pid=91110: Mon Dec 16 14:34:45 2024 00:18:54.000 read: IOPS=9433, BW=36.8MiB/s (38.6MB/s)(73.9MiB/2006msec) 00:18:54.000 slat (nsec): min=1857, max=3157.2k, avg=2550.66, stdev=23154.30 00:18:54.000 clat (usec): min=2546, max=12450, avg=7067.19, stdev=597.45 00:18:54.000 lat (usec): min=2587, max=12452, avg=7069.74, stdev=597.15 00:18:54.000 clat percentiles (usec): 00:18:54.000 | 1.00th=[ 5932], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6652], 00:18:54.000 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7111], 00:18:54.000 | 70.00th=[ 7308], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 7963], 00:18:54.000 | 99.00th=[ 8717], 99.50th=[ 9503], 99.90th=[11600], 99.95th=[11994], 00:18:54.000 | 99.99th=[12387] 00:18:54.000 bw ( KiB/s): min=37072, max=38744, per=99.95%, avg=37716.00, stdev=717.88, samples=4 00:18:54.000 iops : min= 9268, max= 9686, avg=9429.00, stdev=179.47, samples=4 00:18:54.000 write: IOPS=9434, BW=36.9MiB/s (38.6MB/s)(73.9MiB/2006msec); 0 zone resets 00:18:54.000 slat (nsec): min=1881, max=242589, avg=2423.44, stdev=2311.04 00:18:54.000 clat (usec): min=2384, max=12188, avg=6445.55, stdev=539.52 00:18:54.000 lat (usec): min=2398, max=12191, avg=6447.97, stdev=539.44 00:18:54.000 
clat percentiles (usec): 00:18:54.000 | 1.00th=[ 5407], 5.00th=[ 5800], 10.00th=[ 5932], 20.00th=[ 6063], 00:18:54.000 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6521], 00:18:54.000 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7242], 00:18:54.000 | 99.00th=[ 7898], 99.50th=[ 8717], 99.90th=[10945], 99.95th=[11338], 00:18:54.000 | 99.99th=[11600] 00:18:54.000 bw ( KiB/s): min=37504, max=37888, per=99.98%, avg=37730.00, stdev=178.21, samples=4 00:18:54.000 iops : min= 9376, max= 9472, avg=9432.50, stdev=44.55, samples=4 00:18:54.000 lat (msec) : 4=0.09%, 10=99.52%, 20=0.39% 00:18:54.000 cpu : usr=70.67%, sys=22.19%, ctx=27, majf=0, minf=6 00:18:54.000 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:54.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.000 issued rwts: total=18924,18926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.000 00:18:54.000 Run status group 0 (all jobs): 00:18:54.000 READ: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=73.9MiB (77.5MB), run=2006-2006msec 00:18:54.000 WRITE: bw=36.9MiB/s (38.6MB/s), 36.9MiB/s-36.9MiB/s (38.6MB/s-38.6MB/s), io=73.9MiB (77.5MB), run=2006-2006msec 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:54.000 14:34:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:54.000 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:54.000 fio-3.35 00:18:54.000 Starting 1 thread 00:18:56.530 00:18:56.530 test: (groupid=0, jobs=1): err= 0: pid=91157: Mon Dec 16 14:34:48 2024 00:18:56.530 read: IOPS=9246, BW=144MiB/s (151MB/s)(290MiB/2005msec) 00:18:56.530 slat (usec): min=2, max=117, avg= 3.39, stdev= 2.23 00:18:56.530 clat (usec): min=1800, max=15401, avg=7821.65, stdev=2273.83 00:18:56.530 lat (usec): min=1803, max=15404, avg=7825.04, stdev=2273.89 00:18:56.530 clat percentiles (usec): 00:18:56.530 | 1.00th=[ 3720], 5.00th=[ 4424], 10.00th=[ 4948], 20.00th=[ 5800], 00:18:56.530 | 30.00th=[ 6456], 40.00th=[ 6980], 50.00th=[ 7635], 60.00th=[ 8291], 00:18:56.530 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[10814], 95.00th=[11731], 00:18:56.530 | 99.00th=[13960], 99.50th=[14353], 99.90th=[15139], 99.95th=[15270], 00:18:56.530 | 99.99th=[15401] 00:18:56.530 bw ( KiB/s): min=65856, max=81312, per=49.30%, avg=72933.00, stdev=6416.98, samples=4 00:18:56.530 iops : min= 4116, max= 5082, avg=4558.25, stdev=401.08, samples=4 00:18:56.530 write: IOPS=5366, BW=83.8MiB/s (87.9MB/s)(149MiB/1778msec); 0 zone resets 00:18:56.530 slat (usec): min=31, max=172, avg=34.96, stdev= 7.47 00:18:56.530 clat (usec): min=3211, max=19686, avg=10922.11, stdev=1914.37 00:18:56.530 lat (usec): min=3243, max=19720, avg=10957.07, stdev=1914.47 00:18:56.530 clat percentiles (usec): 00:18:56.530 | 1.00th=[ 6915], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[ 9372], 00:18:56.530 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10683], 60.00th=[11207], 00:18:56.530 | 70.00th=[11600], 80.00th=[12387], 90.00th=[13566], 95.00th=[14353], 00:18:56.530 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17695], 99.95th=[18220], 00:18:56.530 | 99.99th=[19792] 00:18:56.530 bw ( KiB/s): min=68960, max=84352, per=88.48%, avg=75967.75, stdev=6348.50, samples=4 00:18:56.530 iops : min= 4310, max= 5272, avg=4747.75, stdev=396.84, samples=4 00:18:56.530 lat (msec) : 2=0.01%, 4=1.41%, 10=64.73%, 20=33.85% 00:18:56.530 cpu : usr=83.13%, sys=12.67%, ctx=3, majf=0, minf=2 00:18:56.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:18:56.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:56.530 issued rwts: total=18539,9541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:56.530 00:18:56.530 Run status group 0 (all jobs): 00:18:56.530 
READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=290MiB (304MB), run=2005-2005msec 00:18:56.530 WRITE: bw=83.8MiB/s (87.9MB/s), 83.8MiB/s-83.8MiB/s (87.9MB/s-87.9MB/s), io=149MiB (156MB), run=1778-1778msec 00:18:56.530 14:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:56.530 14:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:18:56.530 14:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:18:56.530 14:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:18:56.530 14:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:18:56.530 14:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:18:56.530 14:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:56.530 14:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:56.530 14:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:18:56.530 14:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:18:56.530 14:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:18:56.530 14:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:18:57.097 Nvme0n1 00:18:57.097 14:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:18:57.097 14:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=725d423a-64d7-4518-a903-7b9c91be6555 00:18:57.097 14:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 725d423a-64d7-4518-a903-7b9c91be6555 00:18:57.097 14:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=725d423a-64d7-4518-a903-7b9c91be6555 00:18:57.097 14:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:18:57.097 14:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:18:57.097 14:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:18:57.097 14:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:57.355 14:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:18:57.355 { 00:18:57.355 "uuid": "725d423a-64d7-4518-a903-7b9c91be6555", 00:18:57.355 "name": "lvs_0", 00:18:57.355 "base_bdev": "Nvme0n1", 00:18:57.355 "total_data_clusters": 4, 00:18:57.355 "free_clusters": 4, 00:18:57.355 "block_size": 4096, 00:18:57.355 "cluster_size": 1073741824 00:18:57.355 } 00:18:57.355 ]' 00:18:57.355 14:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="725d423a-64d7-4518-a903-7b9c91be6555") .free_clusters' 00:18:57.355 14:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=4 00:18:57.355 
14:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="725d423a-64d7-4518-a903-7b9c91be6555") .cluster_size' 00:18:57.613 14:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:18:57.613 14:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4096 00:18:57.613 4096 00:18:57.613 14:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4096 00:18:57.613 14:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:18:57.872 24eef2c4-e249-4528-a9ae-47a63526d0db 00:18:57.872 14:34:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:18:58.130 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:18:58.388 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:58.647 14:34:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:58.647 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:58.647 fio-3.35 00:18:58.647 Starting 1 thread 00:19:01.179 00:19:01.179 test: (groupid=0, jobs=1): err= 0: pid=91267: Mon Dec 16 14:34:53 2024 00:19:01.180 read: IOPS=6237, BW=24.4MiB/s (25.5MB/s)(48.9MiB/2009msec) 00:19:01.180 slat (nsec): min=1875, max=317152, avg=2829.16, stdev=4091.12 00:19:01.180 clat (usec): min=3047, max=19003, avg=10732.76, stdev=913.44 00:19:01.180 lat (usec): min=3056, max=19005, avg=10735.59, stdev=913.06 00:19:01.180 clat percentiles (usec): 00:19:01.180 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:19:01.180 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:19:01.180 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125], 00:19:01.180 | 99.00th=[12911], 99.50th=[13173], 99.90th=[17957], 99.95th=[18220], 00:19:01.180 | 99.99th=[18482] 00:19:01.180 bw ( KiB/s): min=24023, max=25328, per=99.84%, avg=24911.75, stdev=598.55, samples=4 00:19:01.180 iops : min= 6005, max= 6332, avg=6227.75, stdev=150.01, samples=4 00:19:01.180 write: IOPS=6231, BW=24.3MiB/s (25.5MB/s)(48.9MiB/2009msec); 0 zone resets 00:19:01.180 slat (nsec): min=1991, max=254712, avg=2898.31, stdev=3080.17 00:19:01.180 clat (usec): min=2409, max=19056, avg=9732.55, stdev=845.53 00:19:01.180 lat (usec): min=2423, max=19059, avg=9735.44, stdev=845.35 00:19:01.180 clat percentiles (usec): 00:19:01.180 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[ 8717], 20.00th=[ 9110], 00:19:01.180 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:19:01.180 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:19:01.180 | 99.00th=[11600], 99.50th=[11863], 99.90th=[16188], 99.95th=[17171], 00:19:01.180 | 99.99th=[19006] 00:19:01.180 bw ( KiB/s): min=24664, max=25128, per=99.93%, avg=24911.25, stdev=205.30, samples=4 00:19:01.180 iops : min= 6166, max= 6282, avg=6227.75, stdev=51.28, samples=4 00:19:01.180 lat (msec) : 4=0.06%, 10=41.47%, 20=58.47% 00:19:01.180 cpu : usr=72.86%, sys=21.26%, ctx=6, majf=0, minf=6 00:19:01.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:01.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:01.180 issued rwts: total=12531,12520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:01.180 00:19:01.180 Run status group 0 (all jobs): 00:19:01.180 READ: bw=24.4MiB/s (25.5MB/s), 24.4MiB/s-24.4MiB/s (25.5MB/s-25.5MB/s), io=48.9MiB (51.3MB), run=2009-2009msec 
00:19:01.180 WRITE: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=48.9MiB (51.3MB), run=2009-2009msec 00:19:01.180 14:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:01.180 14:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:19:01.438 14:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=1d53a3ba-bf72-4801-8c64-0c98c0f406bc 00:19:01.438 14:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 1d53a3ba-bf72-4801-8c64-0c98c0f406bc 00:19:01.438 14:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=1d53a3ba-bf72-4801-8c64-0c98c0f406bc 00:19:01.438 14:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:19:01.438 14:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:19:01.438 14:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:19:01.438 14:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:01.697 14:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:19:01.697 { 00:19:01.697 "uuid": "725d423a-64d7-4518-a903-7b9c91be6555", 00:19:01.697 "name": "lvs_0", 00:19:01.697 "base_bdev": "Nvme0n1", 00:19:01.697 "total_data_clusters": 4, 00:19:01.697 "free_clusters": 0, 00:19:01.697 "block_size": 4096, 00:19:01.697 "cluster_size": 1073741824 00:19:01.697 }, 00:19:01.697 { 00:19:01.697 "uuid": "1d53a3ba-bf72-4801-8c64-0c98c0f406bc", 00:19:01.697 "name": "lvs_n_0", 00:19:01.697 "base_bdev": "24eef2c4-e249-4528-a9ae-47a63526d0db", 00:19:01.697 "total_data_clusters": 1022, 00:19:01.697 "free_clusters": 1022, 00:19:01.697 "block_size": 4096, 00:19:01.697 "cluster_size": 4194304 00:19:01.697 } 00:19:01.697 ]' 00:19:01.697 14:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="1d53a3ba-bf72-4801-8c64-0c98c0f406bc") .free_clusters' 00:19:01.697 14:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1022 00:19:01.697 14:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="1d53a3ba-bf72-4801-8c64-0c98c0f406bc") .cluster_size' 00:19:01.956 4088 00:19:01.956 14:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:19:01.956 14:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4088 00:19:01.956 14:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4088 00:19:01.956 14:34:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:19:01.956 4ce0083d-98b8-4e8d-bb01-cacf78e9b71a 00:19:02.214 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:19:02.214 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:19:02.781 14:34:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:19:02.781 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:02.781 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:02.781 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:02.781 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:02.781 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:02.781 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:02.782 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:02.782 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:02.782 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:02.782 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:02.782 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:02.782 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:02.782 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:02.782 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:02.782 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:02.782 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:02.782 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:02.782 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:03.040 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:03.040 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:03.040 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:03.040 14:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:03.040 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:03.040 fio-3.35 00:19:03.040 Starting 1 thread 00:19:05.572 00:19:05.572 test: (groupid=0, jobs=1): err= 0: pid=91341: Mon Dec 16 14:34:57 2024 00:19:05.572 read: 
IOPS=5722, BW=22.4MiB/s (23.4MB/s)(44.9MiB/2009msec) 00:19:05.572 slat (nsec): min=1979, max=258876, avg=2860.31, stdev=3716.90 00:19:05.572 clat (usec): min=3231, max=20820, avg=11693.40, stdev=992.83 00:19:05.572 lat (usec): min=3239, max=20823, avg=11696.26, stdev=992.53 00:19:05.572 clat percentiles (usec): 00:19:05.572 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:19:05.572 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:19:05.572 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[13173], 00:19:05.572 | 99.00th=[13960], 99.50th=[14222], 99.90th=[19268], 99.95th=[20579], 00:19:05.572 | 99.99th=[20841] 00:19:05.572 bw ( KiB/s): min=21920, max=23568, per=99.96%, avg=22882.00, stdev=697.86, samples=4 00:19:05.572 iops : min= 5480, max= 5892, avg=5720.50, stdev=174.47, samples=4 00:19:05.572 write: IOPS=5714, BW=22.3MiB/s (23.4MB/s)(44.8MiB/2009msec); 0 zone resets 00:19:05.572 slat (usec): min=2, max=194, avg= 2.93, stdev= 2.82 00:19:05.572 clat (usec): min=2090, max=19599, avg=10593.57, stdev=941.42 00:19:05.572 lat (usec): min=2101, max=19603, avg=10596.51, stdev=941.29 00:19:05.572 clat percentiles (usec): 00:19:05.572 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:19:05.572 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:19:05.572 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11600], 95.00th=[11863], 00:19:05.572 | 99.00th=[12649], 99.50th=[13173], 99.90th=[18482], 99.95th=[19268], 00:19:05.572 | 99.99th=[19530] 00:19:05.572 bw ( KiB/s): min=22656, max=22904, per=99.83%, avg=22818.00, stdev=110.54, samples=4 00:19:05.572 iops : min= 5664, max= 5726, avg=5704.50, stdev=27.63, samples=4 00:19:05.572 lat (msec) : 4=0.06%, 10=13.14%, 20=86.76%, 50=0.03% 00:19:05.572 cpu : usr=74.15%, sys=20.57%, ctx=6, majf=0, minf=6 00:19:05.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:05.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:05.572 issued rwts: total=11497,11480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:05.572 00:19:05.572 Run status group 0 (all jobs): 00:19:05.572 READ: bw=22.4MiB/s (23.4MB/s), 22.4MiB/s-22.4MiB/s (23.4MB/s-23.4MB/s), io=44.9MiB (47.1MB), run=2009-2009msec 00:19:05.572 WRITE: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.8MiB (47.0MB), run=2009-2009msec 00:19:05.572 14:34:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:05.572 14:34:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:19:05.572 14:34:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:19:05.831 14:34:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:19:06.398 14:34:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:19:06.398 14:34:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:19:06.656 14:34:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:06.915 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:06.915 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:19:06.915 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:19:06.915 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:06.915 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:07.175 rmmod nvme_tcp 00:19:07.175 rmmod nvme_fabrics 00:19:07.175 rmmod nvme_keyring 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 91040 ']' 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 91040 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 91040 ']' 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 91040 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91040 00:19:07.175 killing process with pid 91040 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91040' 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 91040 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 91040 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:07.175 
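Stepping back over the fio_host teardown recorded just above: it unwinds the stack in strict reverse order of construction, subsystem first, then the nested lvol hierarchy from the top down, and only then the NVMe controller. Condensed from the rpc.py calls in the trace (paths exactly as in this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3   # stop exporting the volume
sync
$rpc -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0         # nested lvol first
$rpc bdev_lvol_delete_lvstore -l lvs_n_0                # then the nested store
$rpc bdev_lvol_delete lvs_0/lbd_0                       # its base lvol
$rpc bdev_lvol_delete_lvstore -l lvs_0                  # then the base store
$rpc bdev_nvme_detach_controller Nvme0                  # finally release the NVMe bdev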
14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:07.175 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:19:07.434 00:19:07.434 real 0m18.977s 00:19:07.434 user 1m23.048s 00:19:07.434 sys 0m4.377s 00:19:07.434 ************************************ 00:19:07.434 END TEST nvmf_fio_host 00:19:07.434 ************************************ 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.434 ************************************ 00:19:07.434 START TEST nvmf_failover 00:19:07.434 ************************************ 00:19:07.434 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:07.695 * Looking for test storage... 
00:19:07.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:07.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.695 --rc genhtml_branch_coverage=1 00:19:07.695 --rc genhtml_function_coverage=1 00:19:07.695 --rc genhtml_legend=1 00:19:07.695 --rc geninfo_all_blocks=1 00:19:07.695 --rc geninfo_unexecuted_blocks=1 00:19:07.695 00:19:07.695 ' 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:07.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.695 --rc genhtml_branch_coverage=1 00:19:07.695 --rc genhtml_function_coverage=1 00:19:07.695 --rc genhtml_legend=1 00:19:07.695 --rc geninfo_all_blocks=1 00:19:07.695 --rc geninfo_unexecuted_blocks=1 00:19:07.695 00:19:07.695 ' 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:07.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.695 --rc genhtml_branch_coverage=1 00:19:07.695 --rc genhtml_function_coverage=1 00:19:07.695 --rc genhtml_legend=1 00:19:07.695 --rc geninfo_all_blocks=1 00:19:07.695 --rc geninfo_unexecuted_blocks=1 00:19:07.695 00:19:07.695 ' 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:07.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.695 --rc genhtml_branch_coverage=1 00:19:07.695 --rc genhtml_function_coverage=1 00:19:07.695 --rc genhtml_legend=1 00:19:07.695 --rc geninfo_all_blocks=1 00:19:07.695 --rc geninfo_unexecuted_blocks=1 00:19:07.695 00:19:07.695 ' 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.695 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.696 
14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:07.696 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
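For readability, these are the knobs failover.sh and nvmf/common.sh have established by this point in the trace; this is a restatement of values already visible above, not a separate configuration file:

# NET_TYPE=virt, so the whole test runs over veth pairs rather than real NICs
NVMF_PORT=4420             # first listener the initiator attaches to
NVMF_SECOND_PORT=4421      # alternate listeners the failover test moves I/O between
NVMF_THIRD_PORT=4422
MALLOC_BDEV_SIZE=64        # MiB, size of the Malloc0 bdev the target will export
MALLOC_BLOCK_SIZE=512      # bytes
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock   # bdevperf is driven over its own RPC socket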
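The nvmftestinit call above first clears out any leftover interfaces from a previous run (hence the harmless "Cannot find device" messages in the following lines) and then rebuilds the veth/bridge topology through nvmf_veth_init. Condensed to its first initiator/target pair, the plumbing shown in the next trace lines amounts to:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                       # bridge the two host-side peers
ip link set nvmf_tgt_br master nvmf_br

A second pair (nvmf_init_if2 / nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4) is created the same way, every interface is brought up, iptables ACCEPT rules are added for the NVMe/TCP port, and ping checks in both directions confirm the paths before the target starts.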
00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:07.696 Cannot find device "nvmf_init_br" 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:07.696 Cannot find device "nvmf_init_br2" 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:19:07.696 Cannot find device "nvmf_tgt_br" 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:07.696 Cannot find device "nvmf_tgt_br2" 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:07.696 Cannot find device "nvmf_init_br" 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:07.696 Cannot find device "nvmf_init_br2" 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:07.696 Cannot find device "nvmf_tgt_br" 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:07.696 Cannot find device "nvmf_tgt_br2" 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:07.696 Cannot find device "nvmf_br" 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:19:07.696 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:07.955 Cannot find device "nvmf_init_if" 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:07.956 Cannot find device "nvmf_init_if2" 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:07.956 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:07.956 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:07.956 
14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:07.956 14:34:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:07.956 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:07.956 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:19:07.956 00:19:07.956 --- 10.0.0.3 ping statistics --- 00:19:07.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.956 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:07.956 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:07.956 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:19:07.956 00:19:07.956 --- 10.0.0.4 ping statistics --- 00:19:07.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.956 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:07.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:07.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:07.956 00:19:07.956 --- 10.0.0.1 ping statistics --- 00:19:07.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.956 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:07.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:19:07.956 00:19:07.956 --- 10.0.0.2 ping statistics --- 00:19:07.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.956 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:07.956 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:08.215 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:08.215 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:08.215 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.215 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:08.215 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=91635 00:19:08.215 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 91635 00:19:08.215 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 91635 ']' 00:19:08.215 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:19:08.215 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:08.215 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.215 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.215 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.215 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:08.215 [2024-12-16 14:35:00.219953] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:08.215 [2024-12-16 14:35:00.220054] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.215 [2024-12-16 14:35:00.369151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:08.215 [2024-12-16 14:35:00.388106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.215 [2024-12-16 14:35:00.388320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.215 [2024-12-16 14:35:00.388395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.215 [2024-12-16 14:35:00.388548] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.215 [2024-12-16 14:35:00.388625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:08.215 [2024-12-16 14:35:00.389359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.215 [2024-12-16 14:35:00.389536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:08.215 [2024-12-16 14:35:00.389627] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.475 [2024-12-16 14:35:00.418612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:08.475 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.475 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:08.475 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:08.475 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.475 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:08.475 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.475 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:08.733 [2024-12-16 14:35:00.739161] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.733 14:35:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:08.992 Malloc0 00:19:08.992 14:35:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:09.251 14:35:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:09.509 14:35:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:09.768 [2024-12-16 14:35:01.802950] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:09.768 14:35:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:10.027 [2024-12-16 14:35:02.019124] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:10.027 14:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:10.285 [2024-12-16 14:35:02.235310] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:10.286 14:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:10.286 14:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=91685 00:19:10.286 14:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
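Condensing the rpc.py traffic above together with the attach calls that follow immediately below, the failover fixture is: one TCP subsystem backed by Malloc0, listening on all three ports, with bdevperf attaching the same controller in failover mode over two of them. Paths, NQN and addresses are exactly as in this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                     # three listeners to fail over between
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
done
# bdevperf was started above with -z -r /var/tmp/bdevperf.sock; it is now given two paths:
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
     -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
     -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

Removing the 4420 listener while bdevperf I/O is running (host/failover.sh@43, a little further down) is what forces the path switch; the repeated tqpair recv-state messages that follow in the trace are emitted by the target's TCP transport while those connections are torn down.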
00:19:10.286 14:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 91685 /var/tmp/bdevperf.sock 00:19:10.286 14:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 91685 ']' 00:19:10.286 14:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.286 14:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.286 14:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.286 14:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.286 14:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:10.545 14:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.545 14:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:10.545 14:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:10.803 NVMe0n1 00:19:10.803 14:35:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:11.062 00:19:11.062 14:35:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=91701 00:19:11.062 14:35:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:11.062 14:35:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:19:12.439 14:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:12.439 [2024-12-16 14:35:04.456959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2466df0 is same with the state(6) to be set 00:19:12.439 [2024-12-16 14:35:04.457018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2466df0 is same with the state(6) to be set 00:19:12.439 [2024-12-16 14:35:04.457028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2466df0 is same with the state(6) to be set 00:19:12.439 [2024-12-16 14:35:04.457036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2466df0 is same with the state(6) to be set 00:19:12.439 [2024-12-16 14:35:04.457043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2466df0 is same with the state(6) to be set 00:19:12.439 [2024-12-16 14:35:04.457050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2466df0 is same with the state(6) to be set 00:19:12.439 [2024-12-16 14:35:04.457057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2466df0 is same with the state(6) to be set 00:19:12.439 [2024-12-16 14:35:04.457064] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2466df0 is same with the state(6) to be set 00:19:12.439 [... the same nvmf_tcp_qpair_set_recv_state error for tqpair=0x2466df0 repeats with advancing timestamps through 2024-12-16 14:35:04.458003 during the nvmf_subsystem_remove_listener call for 10.0.0.3:4420 above ...] 00:19:12.440 14:35:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:19:15.773 14:35:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:15.773 00:19:15.773 14:35:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:16.031 [2024-12-16 14:35:08.128881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467a50 is same with the state(6) to be set 00:19:16.031 [2024-12-16 14:35:08.128941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467a50 is same with the state(6) to be set 00:19:16.031 [2024-12-16 14:35:08.128952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467a50 is same with the state(6)
to be set 00:19:16.031 [2024-12-16 14:35:08.128960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467a50 is same with the state(6) to be set 00:19:16.031 [2024-12-16 14:35:08.128968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467a50 is same with the state(6) to be set 00:19:16.031 14:35:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:19.317 14:35:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:19.317 [2024-12-16 14:35:11.395926] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:19.317 14:35:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:19:20.252 14:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:20.511 14:35:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 91701 00:19:27.085 { 00:19:27.085 "results": [ 00:19:27.085 { 00:19:27.085 "job": "NVMe0n1", 00:19:27.085 "core_mask": "0x1", 00:19:27.085 "workload": "verify", 00:19:27.085 "status": "finished", 00:19:27.085 "verify_range": { 00:19:27.085 "start": 0, 00:19:27.085 "length": 16384 00:19:27.085 }, 00:19:27.085 "queue_depth": 128, 00:19:27.085 "io_size": 4096, 00:19:27.085 "runtime": 15.00817, 00:19:27.085 "iops": 10022.474425596192, 00:19:27.085 "mibps": 39.150290724985126, 00:19:27.085 "io_failed": 3693, 00:19:27.085 "io_timeout": 0, 00:19:27.085 "avg_latency_us": 12435.470915131382, 00:19:27.085 "min_latency_us": 588.3345454545455, 00:19:27.086 "max_latency_us": 15847.796363636364 00:19:27.086 } 00:19:27.086 ], 00:19:27.086 "core_count": 1 00:19:27.086 } 00:19:27.086 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 91685 00:19:27.086 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 91685 ']' 00:19:27.086 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 91685 00:19:27.086 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:27.086 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.086 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91685 00:19:27.086 killing process with pid 91685 00:19:27.086 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:27.086 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:27.086 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91685' 00:19:27.086 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 91685 00:19:27.086 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 91685 00:19:27.086 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:27.086 [2024-12-16 14:35:02.291055] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
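As a quick cross-check of the bdevperf summary printed above (not something the harness itself does), the reported "mibps" figure is simply the reported "iops" multiplied by the 4096-byte I/O size and divided by 2^20:

  # 10022.474 IOPS x 4096 B per I/O / 1048576 B per MiB -> ~39.150 MiB/s, matching "mibps"
  awk 'BEGIN { printf "%.6f MiB/s\n", 10022.474425596192 * 4096 / 1048576 }'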
00:19:27.086 [2024-12-16 14:35:02.291163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91685 ] 00:19:27.086 [2024-12-16 14:35:02.426587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.086 [2024-12-16 14:35:02.445333] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.086 [2024-12-16 14:35:02.472834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:27.086 Running I/O for 15 seconds... 00:19:27.086 7829.00 IOPS, 30.58 MiB/s [2024-12-16T14:35:19.286Z] [2024-12-16 14:35:04.458057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.086 [2024-12-16 14:35:04.458097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.086 [2024-12-16 14:35:04.458123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.086 [2024-12-16 14:35:04.458139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.086 [2024-12-16 14:35:04.458154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.086 [2024-12-16 14:35:04.458168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.086 [2024-12-16 14:35:04.458183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.086 [2024-12-16 14:35:04.458197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.086 [2024-12-16 14:35:04.458211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.086 [2024-12-16 14:35:04.458225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.086 [2024-12-16 14:35:04.458239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.086 [2024-12-16 14:35:04.458252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.086 [2024-12-16 14:35:04.458268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.086 [2024-12-16 14:35:04.458281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.086 [2024-12-16 14:35:04.458295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.086 [2024-12-16 14:35:04.458308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:27.086 [... the remaining outstanding I/Os are dumped in the same way: nvme_io_qpair_print_command notices for READ commands (lba 72040 through 72856) and then WRITE commands (lba 72880 onward) on sqid:1, each followed by an ABORTED - SQ DELETION (00/08) completion as the qpair to the removed 10.0.0.3:4420 listener is torn down ...] 00:19:27.089 [2024-12-16 14:35:04.461669] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.089 [2024-12-16 14:35:04.461682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:04.461696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.089 [2024-12-16 14:35:04.461708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:04.461722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.089 [2024-12-16 14:35:04.461735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:04.461749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.089 [2024-12-16 14:35:04.461762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:04.461776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.089 [2024-12-16 14:35:04.461789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:04.461810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.089 [2024-12-16 14:35:04.461825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:04.461839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.089 [2024-12-16 14:35:04.461851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:04.461865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.089 [2024-12-16 14:35:04.461878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:04.461892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629470 is same with the state(6) to be set 00:19:27.089 [2024-12-16 14:35:04.461908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.089 [2024-12-16 14:35:04.461917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.089 [2024-12-16 14:35:04.461927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72872 len:8 PRP1 0x0 PRP2 0x0 00:19:27.089 [2024-12-16 14:35:04.461939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:04.461986] bdev_nvme.c:2057:bdev_nvme_failover_trid: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:27.089 [2024-12-16 14:35:04.462040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.089 [2024-12-16 14:35:04.462060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:04.462074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.089 [2024-12-16 14:35:04.462086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:04.462099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.089 [2024-12-16 14:35:04.462111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:04.462126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.089 [2024-12-16 14:35:04.462140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:04.462153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:27.089 [2024-12-16 14:35:04.462206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1608a90 (9): Bad file descriptor 00:19:27.089 [2024-12-16 14:35:04.465708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:27.089 [2024-12-16 14:35:04.491005] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
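[editor's annotation, not part of the captured test output] The trace above is one failover cycle of the nvmf TCP test: in-flight I/O on qid:1 is completed with "ABORTED - SQ DELETION (00/08)" while bdev_nvme fails over from 10.0.0.3:4420 to 10.0.0.3:4421 and resets the controller. The "(00/08)" pair is the NVMe status code type / status code, and the trailing p/m/dnr flags come from the same 16-bit status word that SPDK prints. The IOPS / MiB/s samples that follow are consistent with 4 KiB I/O, assuming "len:8" means eight 512-byte blocks. The sketch below is only an illustration of that decoding and arithmetic; it is not SPDK code and is not invoked by the test.

/* Minimal standalone sketch (hypothetical, for illustration only):
 * decode the completion status word behind prints such as
 * "ABORTED - SQ DELETION (00/08) ... p:0 m:0 dnr:0", and reproduce the
 * MiB/s figures from the IOPS samples assuming 4 KiB per I/O. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
    uint8_t p;    /* phase tag,        bit 0     */
    uint8_t sc;   /* status code,      bits 8:1  */
    uint8_t sct;  /* status code type, bits 11:9 */
    uint8_t m;    /* more,             bit 14    */
    uint8_t dnr;  /* do not retry,     bit 15    */
};

static struct nvme_status decode_status(uint16_t status_word)
{
    struct nvme_status s;
    s.p   = status_word & 0x1;
    s.sc  = (status_word >> 1) & 0xff;
    s.sct = (status_word >> 9) & 0x7;
    s.m   = (status_word >> 14) & 0x1;
    s.dnr = (status_word >> 15) & 0x1;
    return s;
}

int main(void)
{
    /* SCT 0x00 (generic command status), SC 0x08 (command aborted due to
     * SQ deletion): status word = (0x00 << 9) | (0x08 << 1) = 0x0010. */
    struct nvme_status s = decode_status(0x0010);
    printf("sct:%02x sc:%02x p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);

    /* Throughput samples from the log: IOPS * 4096 B / 2^20 = MiB/s
     * (e.g. 8642 IOPS -> 33.76 MiB/s, matching the samples below). */
    double iops[] = { 8642.0, 9196.0, 9517.0 };
    for (unsigned i = 0; i < sizeof(iops) / sizeof(iops[0]); i++)
        printf("%.2f IOPS -> %.2f MiB/s\n", iops[i],
               iops[i] * 4096.0 / (1024.0 * 1024.0));
    return 0;
}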
00:19:27.089 8642.00 IOPS, 33.76 MiB/s [2024-12-16T14:35:19.289Z] 9196.00 IOPS, 35.92 MiB/s [2024-12-16T14:35:19.289Z] 9517.00 IOPS, 37.18 MiB/s [2024-12-16T14:35:19.289Z] [2024-12-16 14:35:08.127386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.089 [2024-12-16 14:35:08.127472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:08.127514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.089 [2024-12-16 14:35:08.127529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:08.127542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.089 [2024-12-16 14:35:08.127554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:08.127567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.089 [2024-12-16 14:35:08.127578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:08.127591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1608a90 is same with the state(6) to be set 00:19:27.089 [2024-12-16 14:35:08.129413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.089 [2024-12-16 14:35:08.129444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:08.129467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.089 [2024-12-16 14:35:08.129497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:08.129514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.089 [2024-12-16 14:35:08.129529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:08.129544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:119864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.089 [2024-12-16 14:35:08.129558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:08.129574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.089 [2024-12-16 14:35:08.129588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:08.129603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.089 [2024-12-16 14:35:08.129631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:08.129647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.089 [2024-12-16 14:35:08.129660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:08.129674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.089 [2024-12-16 14:35:08.129688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:08.129703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.089 [2024-12-16 14:35:08.129716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:08.129744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.089 [2024-12-16 14:35:08.129774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:08.129789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.089 [2024-12-16 14:35:08.129802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:08.129831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.089 [2024-12-16 14:35:08.129844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:08.129857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.089 [2024-12-16 14:35:08.129870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.089 [2024-12-16 14:35:08.129884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.089 [2024-12-16 14:35:08.129897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.129911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.090 [2024-12-16 14:35:08.129923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.129937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:104 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.090 [2024-12-16 14:35:08.129950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.129964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.090 [2024-12-16 14:35:08.129978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.129992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.090 [2024-12-16 14:35:08.130005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.090 [2024-12-16 14:35:08.130032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.090 [2024-12-16 14:35:08.130061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.090 [2024-12-16 14:35:08.130088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.090 [2024-12-16 14:35:08.130123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:119912 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.090 [2024-12-16 14:35:08.130620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.090 [2024-12-16 14:35:08.130647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.090 [2024-12-16 14:35:08.130675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.090 [2024-12-16 14:35:08.130703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.090 [2024-12-16 14:35:08.130731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.090 [2024-12-16 14:35:08.130767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.090 [2024-12-16 14:35:08.130813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.090 [2024-12-16 
14:35:08.130841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.090 [2024-12-16 14:35:08.130894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.090 [2024-12-16 14:35:08.130909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.130924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.091 [2024-12-16 14:35:08.130938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.130953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.091 [2024-12-16 14:35:08.130967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.130981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.091 [2024-12-16 14:35:08.130995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.091 [2024-12-16 14:35:08.131023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.091 [2024-12-16 14:35:08.131066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.091 [2024-12-16 14:35:08.131094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.091 [2024-12-16 14:35:08.131587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.091 [2024-12-16 14:35:08.131613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.091 [2024-12-16 14:35:08.131649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.091 [2024-12-16 14:35:08.131675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.091 [2024-12-16 14:35:08.131702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.091 [2024-12-16 14:35:08.131729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.091 [2024-12-16 14:35:08.131755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.091 [2024-12-16 14:35:08.131782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.131980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.131995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.132008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:27.091 [2024-12-16 14:35:08.132022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.132035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.132048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.132061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.091 [2024-12-16 14:35:08.132075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.091 [2024-12-16 14:35:08.132088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.092 [2024-12-16 14:35:08.132114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.092 [2024-12-16 14:35:08.132141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.092 [2024-12-16 14:35:08.132167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.092 [2024-12-16 14:35:08.132194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.092 [2024-12-16 14:35:08.132221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.092 [2024-12-16 14:35:08.132247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.092 [2024-12-16 14:35:08.132274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132288] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.092 [2024-12-16 14:35:08.132300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.092 [2024-12-16 14:35:08.132338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.092 [2024-12-16 14:35:08.132365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.092 [2024-12-16 14:35:08.132391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.092 [2024-12-16 14:35:08.132420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.092 [2024-12-16 14:35:08.132459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.092 [2024-12-16 14:35:08.132486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.092 [2024-12-16 14:35:08.132530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.092 [2024-12-16 14:35:08.132557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.092 [2024-12-16 14:35:08.132584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.092 [2024-12-16 14:35:08.132599] nvme_qpair.c: 
[repeated log output elided: for every command still in flight on sqid:1 (READ lba 120208-120264, WRITE lba 120752-120856), nvme_qpair.c printed an nvme_io_qpair_print_command NOTICE followed by an spdk_nvme_print_completion NOTICE reporting ABORTED - SQ DELETION (00/08), and nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request completed the queued I/O manually while qid:1 was deleted]
00:19:27.092 [2024-12-16 14:35:08.132800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162c970 is same with the state(6) to be set
00:19:27.093 [2024-12-16 14:35:08.133534] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422
00:19:27.093 [2024-12-16 14:35:08.133551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:19:27.093 [2024-12-16 14:35:08.137178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:19:27.093 [2024-12-16 14:35:08.137214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1608a90 (9): Bad file descriptor
00:19:27.093 [2024-12-16 14:35:08.165116] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:19:27.093 9575.20 IOPS, 37.40 MiB/s [2024-12-16T14:35:19.293Z] 9700.67 IOPS, 37.89 MiB/s [2024-12-16T14:35:19.293Z] 9786.86 IOPS, 38.23 MiB/s [2024-12-16T14:35:19.293Z] 9854.50 IOPS, 38.49 MiB/s [2024-12-16T14:35:19.293Z] 9899.11 IOPS, 38.67 MiB/s [2024-12-16T14:35:19.293Z]
[repeated log output elided: the same NOTICE pattern at 14:35:12.665893-14:35:12.670122 for the commands in flight during the next queue teardown (WRITE lba 107792-108232, READ lba 107216-107784), each completed as ABORTED - SQ DELETION (00/08) on qid:1]
00:19:27.096 [2024-12-16 14:35:12.670137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bcc90 is same with the state(6) to be set
00:19:27.096 [2024-12-16 14:35:12.670248] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420
[repeated log output elided: ASYNC EVENT REQUEST (0c) admin commands qid:0 cid:0-3 each reported as ABORTED - SQ DELETION (00/08)]
00:19:27.096 [2024-12-16 14:35:12.670423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:19:27.096 [2024-12-16 14:35:12.670454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1608a90 (9): Bad file descriptor
00:19:27.096 [2024-12-16 14:35:12.674125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:19:27.096 [2024-12-16 14:35:12.703867] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
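The two bursts above are the expected signature of a path switch: bdev_nvme deletes the old submission queue, every queued READ/WRITE completes as ABORTED - SQ DELETION, bdev_nvme_failover_trid picks the next TRID (4421 -> 4422 -> 4420), and the controller is reset on the new path. A minimal sketch, using only the message strings shown above and assuming the bdevperf output was captured to the try.txt file the script cats further down, of pulling the failover chain and reset count back out of such a log:

  log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  # List every path transition in order, e.g. "Start failover from 10.0.0.3:4421 to 10.0.0.3:4422"
  grep -o 'Start failover from [0-9.]*:[0-9]* to [0-9.]*:[0-9]*' "$log"
  # Count completed failovers; the trace below expects exactly 3 of these
  grep -c 'Resetting controller successful' "$log"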
00:19:27.096 9881.90 IOPS, 38.60 MiB/s [2024-12-16T14:35:19.296Z] 9929.00 IOPS, 38.79 MiB/s [2024-12-16T14:35:19.296Z] 9952.25 IOPS, 38.88 MiB/s [2024-12-16T14:35:19.296Z] 9980.54 IOPS, 38.99 MiB/s [2024-12-16T14:35:19.296Z] 10003.64 IOPS, 39.08 MiB/s [2024-12-16T14:35:19.296Z] 10023.67 IOPS, 39.15 MiB/s
00:19:27.096 Latency(us)
00:19:27.096 [2024-12-16T14:35:19.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:27.096 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:27.096 Verification LBA range: start 0x0 length 0x4000
00:19:27.096 NVMe0n1 : 15.01 10022.47 39.15 246.07 0.00 12435.47 588.33 15847.80
00:19:27.096 [2024-12-16T14:35:19.296Z] ===================================================================================================================
00:19:27.096 [2024-12-16T14:35:19.296Z] Total : 10022.47 39.15 246.07 0.00 12435.47 588.33 15847.80
00:19:27.096 Received shutdown signal, test time was about 15.000000 seconds
00:19:27.096
00:19:27.096 Latency(us)
00:19:27.096 [2024-12-16T14:35:19.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:27.096 [2024-12-16T14:35:19.296Z] ===================================================================================================================
00:19:27.096 [2024-12-16T14:35:19.296Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:27.096 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:19:27.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:27.096 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:19:27.096 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:19:27.096 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=91874
00:19:27.096 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:19:27.096 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 91874 /var/tmp/bdevperf.sock
00:19:27.096 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 91874 ']'
00:19:27.096 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:27.096 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:27.096 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
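The trace at failover.sh@65-@75 gates on exactly three successful failovers and then relaunches bdevperf in RPC-server mode for the second phase. A minimal sketch of that pattern, assuming the count is taken from the try.txt capture that the script cats at @94 and that waitforlisten is the autotest helper seen in the trace; the bdevperf flags are copied verbatim from the invocation above:

  # Require exactly three completed failovers before continuing.
  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  (( count != 3 )) && exit 1
  # Start bdevperf as an RPC server (-z) and wait for its UNIX socket.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock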
00:19:27.096 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:27.096 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:19:27.096 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:27.096 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:19:27.097 14:35:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:19:27.097 [2024-12-16 14:35:19.059759] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:19:27.097 14:35:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:19:27.355 [2024-12-16 14:35:19.368084] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
00:19:27.355 14:35:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:19:27.613 NVMe0n1
00:19:27.613 14:35:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:19:27.872
00:19:27.872 14:35:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:19:28.439
00:19:28.439 14:35:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:28.439 14:35:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:19:28.698 14:35:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:19:28.956 14:35:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:19:32.238 14:35:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:32.238 14:35:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:19:32.238 14:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=91949
00:19:32.238 14:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:32.238 14:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 91949
00:19:33.173 {
00:19:33.173 "results": [
00:19:33.173 {
00:19:33.173 "job": "NVMe0n1",
00:19:33.173 "core_mask": "0x1",
00:19:33.173 "workload": "verify",
00:19:33.173 "status": "finished",
00:19:33.173 "verify_range": {
00:19:33.173 "start": 0,
00:19:33.173 "length": 16384
00:19:33.173 },
00:19:33.173 "queue_depth": 128,
00:19:33.173 "io_size": 4096, 00:19:33.173 "runtime": 1.007762, 00:19:33.173 "iops": 8335.301390606115, 00:19:33.173 "mibps": 32.55977105705514, 00:19:33.173 "io_failed": 0, 00:19:33.173 "io_timeout": 0, 00:19:33.173 "avg_latency_us": 15275.088678787879, 00:19:33.173 "min_latency_us": 1131.9854545454546, 00:19:33.173 "max_latency_us": 13345.512727272728 00:19:33.173 } 00:19:33.173 ], 00:19:33.173 "core_count": 1 00:19:33.173 } 00:19:33.173 14:35:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:33.173 [2024-12-16 14:35:18.530024] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:33.173 [2024-12-16 14:35:18.530126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91874 ] 00:19:33.173 [2024-12-16 14:35:18.676257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.173 [2024-12-16 14:35:18.694806] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.173 [2024-12-16 14:35:18.721686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:33.173 [2024-12-16 14:35:20.886134] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:33.173 [2024-12-16 14:35:20.886710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.173 [2024-12-16 14:35:20.886878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.173 [2024-12-16 14:35:20.886965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.173 [2024-12-16 14:35:20.887049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.173 [2024-12-16 14:35:20.887115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.173 [2024-12-16 14:35:20.887213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.173 [2024-12-16 14:35:20.887275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.173 [2024-12-16 14:35:20.887340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.173 [2024-12-16 14:35:20.887401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:19:33.173 [2024-12-16 14:35:20.887526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:19:33.173 [2024-12-16 14:35:20.887634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30a90 (9): Bad file descriptor 00:19:33.173 [2024-12-16 14:35:20.890391] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:19:33.173 Running I/O for 1 seconds... 00:19:33.173 8272.00 IOPS, 32.31 MiB/s 00:19:33.173 Latency(us) 00:19:33.173 [2024-12-16T14:35:25.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.173 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:33.173 Verification LBA range: start 0x0 length 0x4000 00:19:33.173 NVMe0n1 : 1.01 8335.30 32.56 0.00 0.00 15275.09 1131.99 13345.51 00:19:33.173 [2024-12-16T14:35:25.373Z] =================================================================================================================== 00:19:33.173 [2024-12-16T14:35:25.373Z] Total : 8335.30 32.56 0.00 0.00 15275.09 1131.99 13345.51 00:19:33.173 14:35:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:33.173 14:35:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:33.431 14:35:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:33.689 14:35:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:33.689 14:35:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:33.948 14:35:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:34.514 14:35:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:37.799 14:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:37.799 14:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:37.799 14:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 91874 00:19:37.799 14:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 91874 ']' 00:19:37.799 14:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 91874 00:19:37.799 14:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:37.799 14:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:37.799 14:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91874 00:19:37.799 killing process with pid 91874 00:19:37.799 14:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:37.799 14:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:37.799 14:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91874' 00:19:37.799 14:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 91874 00:19:37.799 14:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 91874 00:19:37.799 14:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:19:37.799 14:35:29 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:38.058 rmmod nvme_tcp 00:19:38.058 rmmod nvme_fabrics 00:19:38.058 rmmod nvme_keyring 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 91635 ']' 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 91635 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 91635 ']' 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 91635 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.058 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91635 00:19:38.317 killing process with pid 91635 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91635' 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 91635 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 91635 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:38.317 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:38.577 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:38.577 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:38.577 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:38.577 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:38.577 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:38.577 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.577 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.577 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.577 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:19:38.577 ************************************ 00:19:38.577 END TEST nvmf_failover 00:19:38.577 ************************************ 00:19:38.577 00:19:38.577 real 0m31.051s 00:19:38.577 user 2m0.193s 00:19:38.577 sys 0m5.289s 00:19:38.577 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.577 14:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:38.577 14:35:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:38.577 14:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:38.577 14:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.577 14:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.577 ************************************ 00:19:38.577 START TEST nvmf_host_discovery 00:19:38.577 ************************************ 00:19:38.577 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:38.837 * Looking for test storage... 
00:19:38.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:38.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.837 --rc genhtml_branch_coverage=1 00:19:38.837 --rc genhtml_function_coverage=1 00:19:38.837 --rc genhtml_legend=1 00:19:38.837 --rc geninfo_all_blocks=1 00:19:38.837 --rc geninfo_unexecuted_blocks=1 00:19:38.837 00:19:38.837 ' 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:38.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.837 --rc genhtml_branch_coverage=1 00:19:38.837 --rc genhtml_function_coverage=1 00:19:38.837 --rc genhtml_legend=1 00:19:38.837 --rc geninfo_all_blocks=1 00:19:38.837 --rc geninfo_unexecuted_blocks=1 00:19:38.837 00:19:38.837 ' 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:38.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.837 --rc genhtml_branch_coverage=1 00:19:38.837 --rc genhtml_function_coverage=1 00:19:38.837 --rc genhtml_legend=1 00:19:38.837 --rc geninfo_all_blocks=1 00:19:38.837 --rc geninfo_unexecuted_blocks=1 00:19:38.837 00:19:38.837 ' 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:38.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.837 --rc genhtml_branch_coverage=1 00:19:38.837 --rc genhtml_function_coverage=1 00:19:38.837 --rc genhtml_legend=1 00:19:38.837 --rc geninfo_all_blocks=1 00:19:38.837 --rc geninfo_unexecuted_blocks=1 00:19:38.837 00:19:38.837 ' 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:38.837 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:38.838 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
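Note: the variables above fully describe the virtual topology that nvmf_veth_init assembles in the commands that follow: two initiator veth interfaces on the host (10.0.0.1 and 10.0.0.2) and two target interfaces (10.0.0.3 and 10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. A condensed sketch of the equivalent setup, with one interface pair shown (the second pair repeats the pattern with the *if2/*br2 names); this is an illustration distilled from the commands logged below, not the test script itself:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end + bridge-side peer
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end + bridge-side peer
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peers together
    ip link set nvmf_tgt_br master nvmf_br; ip link set nvmf_tgt_br up
    ping -c 1 10.0.0.3                                           # host -> namespaced target sanity check
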
00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:38.838 Cannot find device "nvmf_init_br" 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:38.838 Cannot find device "nvmf_init_br2" 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:38.838 Cannot find device "nvmf_tgt_br" 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:38.838 Cannot find device "nvmf_tgt_br2" 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:38.838 Cannot find device "nvmf_init_br" 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:38.838 Cannot find device "nvmf_init_br2" 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:19:38.838 14:35:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:38.838 Cannot find device "nvmf_tgt_br" 00:19:38.838 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:19:38.838 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:38.838 Cannot find device "nvmf_tgt_br2" 00:19:38.838 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:19:38.838 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:39.098 Cannot find device "nvmf_br" 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:39.098 Cannot find device "nvmf_init_if" 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:39.098 Cannot find device "nvmf_init_if2" 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:39.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:39.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:39.098 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:39.357 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:39.357 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:39.357 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:19:39.357 00:19:39.357 --- 10.0.0.3 ping statistics --- 00:19:39.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.357 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:39.357 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:39.357 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:39.357 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:19:39.357 00:19:39.357 --- 10.0.0.4 ping statistics --- 00:19:39.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.357 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:19:39.357 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:39.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:39.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:19:39.357 00:19:39.357 --- 10.0.0.1 ping statistics --- 00:19:39.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.357 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:19:39.357 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:39.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:39.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:19:39.357 00:19:39.357 --- 10.0.0.2 ping statistics --- 00:19:39.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.357 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:19:39.357 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=92272 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 92272 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 92272 ']' 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.358 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.358 [2024-12-16 14:35:31.406204] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:19:39.358 [2024-12-16 14:35:31.407000] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.358 [2024-12-16 14:35:31.549920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.617 [2024-12-16 14:35:31.568832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.617 [2024-12-16 14:35:31.568877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.617 [2024-12-16 14:35:31.568887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.617 [2024-12-16 14:35:31.568893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.617 [2024-12-16 14:35:31.568898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.617 [2024-12-16 14:35:31.569138] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.617 [2024-12-16 14:35:31.595985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.617 [2024-12-16 14:35:31.711446] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.617 [2024-12-16 14:35:31.719640] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.617 14:35:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.617 null0 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.617 null1 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.617 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=92291 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 92291 /tmp/host.sock 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 92291 ']' 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.617 14:35:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.617 [2024-12-16 14:35:31.806536] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:19:39.617 [2024-12-16 14:35:31.806836] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92291 ] 00:19:39.908 [2024-12-16 14:35:31.958872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.908 [2024-12-16 14:35:31.983037] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.908 [2024-12-16 14:35:32.016025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:39.908 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.908 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:19:39.908 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:39.908 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:39.908 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.908 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.167 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.167 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:40.167 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.167 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.167 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.167 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:40.167 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:40.167 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:40.167 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:40.167 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:40.167 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.168 14:35:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.168 14:35:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:40.168 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.427 [2024-12-16 14:35:32.459773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.427 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:19:40.686 14:35:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:19:40.945 [2024-12-16 14:35:33.104216] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:40.945 [2024-12-16 14:35:33.104391] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:40.945 
[2024-12-16 14:35:33.104423] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:40.945 [2024-12-16 14:35:33.110249] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:41.203 [2024-12-16 14:35:33.164591] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:19:41.203 [2024-12-16 14:35:33.165359] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xd11390:1 started. 00:19:41.203 [2024-12-16 14:35:33.167112] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:41.203 [2024-12-16 14:35:33.167148] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:41.203 [2024-12-16 14:35:33.172948] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xd11390 was disconnected and freed. delete nvme_qpair. 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:41.770 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.771 [2024-12-16 14:35:33.936015] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xcfb330:1 started. 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:41.771 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:41.771 [2024-12-16 14:35:33.943423] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xcfb330 was disconnected and freed. delete nvme_qpair. 
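
The `nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1` call traced above is expected to surface a second namespace on the host, which the script then confirms by polling `get_bdev_list` until it reads "nvme0n1 nvme0n2". A minimal bash sketch of that poll, reconstructed from the trace (the /tmp/host.sock RPC socket, the jq filter and the 10-try/1-second budget all appear in the log; calling SPDK's scripts/rpc.py directly from the SPDK root is an assumption standing in for the test's rpc_cmd wrapper):

#!/usr/bin/env bash
# Sketch: wait until the host-side bdev list shows both namespaces.
HOST_SOCK=/tmp/host.sock

get_bdev_list() {
    # Same pipeline as host/discovery.sh: bdev names, sorted, joined on one line.
    ./scripts/rpc.py -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

for _ in $(seq 1 10); do
    if [[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]; then
        echo "both namespaces attached"
        exit 0
    fi
    sleep 1
done
echo "timed out waiting for nvme0n1 nvme0n2; got: $(get_bdev_list)" >&2
exit 1
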
00:19:42.030 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.030 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:42.030 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:42.030 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:42.030 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:42.030 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:42.030 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:42.030 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:42.031 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:42.031 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:42.031 14:35:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.031 [2024-12-16 14:35:34.057249] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:42.031 [2024-12-16 14:35:34.058139] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:42.031 [2024-12-16 14:35:34.058306] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:42.031 [2024-12-16 14:35:34.064145] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.031 [2024-12-16 14:35:34.129015] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:19:42.031 [2024-12-16 14:35:34.129208] 
bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:42.031 [2024-12-16 14:35:34.129223] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:42.031 [2024-12-16 14:35:34.129229] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:42.031 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.291 [2024-12-16 14:35:34.290405] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:42.291 [2024-12-16 14:35:34.290432] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:42.291 [2024-12-16 14:35:34.294707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.291 [2024-12-16 14:35:34.294741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.291 [2024-12-16 14:35:34.294754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.291 [2024-12-16 14:35:34.294763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.291 [2024-12-16 14:35:34.294773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.291 [2024-12-16 14:35:34.294807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.291 [2024-12-16 14:35:34.294833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.291 
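
The qpair teardown messages above follow the `nvmf_subsystem_remove_listener ... -s 4420` call; on the host side the test then expects controller nvme0 to keep only the 4421 path while its bdevs stay intact. A short sketch of that path check under the same assumptions as before (socket path, controller name and jq filter are taken verbatim from the trace; the retry budget mirrors waitforcondition's max=10 with a one-second sleep):

#!/usr/bin/env bash
# Sketch: after the 4420 listener is removed, only port 4421 should remain
# among nvme0's paths.
HOST_SOCK=/tmp/host.sock

get_subsystem_paths() {
    ./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

for _ in $(seq 1 10); do
    [[ "$(get_subsystem_paths nvme0)" == "4421" ]] && exit 0
    sleep 1
done
echo "path 4420 still present: $(get_subsystem_paths nvme0)" >&2
exit 1
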
[2024-12-16 14:35:34.294852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.291 [2024-12-16 14:35:34.294861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcecb00 is same with the state(6) to be set 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:42.291 [2024-12-16 14:35:34.296418] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:19:42.291 [2024-12-16 14:35:34.296453] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:42.291 [2024-12-16 14:35:34.296517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcecb00 (9): Bad file descriptor 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 
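
Interleaved with these checks the script keeps counting SPDK notifications, pairing `notify_get_notifications -i <last_id>` with `jq '. | length'` and advancing the cursor by however many events were returned (the traced values move notification_count/notify_id through 1→1, 1→2 and later 2→4). A hedged reconstruction of that counter using the same socket and RPC as the log (treating the id as a running cursor is inferred from those traced values, and the direct rpc.py call is again an assumption in place of rpc_cmd):

#!/usr/bin/env bash
# Sketch: count notifications newer than the last seen id and advance the id,
# mirroring get_notification_count in host/discovery.sh.
HOST_SOCK=/tmp/host.sock
notify_id=0

get_notification_count() {
    local count
    count=$(./scripts/rpc.py -s "$HOST_SOCK" notify_get_notifications -i "$notify_id" \
        | jq '. | length')
    notify_id=$((notify_id + count))   # next call only sees newer events
    echo "$count"
}

# Example: one bdev attach should have produced exactly one notification.
[[ "$(get_notification_count)" -eq 1 ]] || echo "unexpected notification count" >&2
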
00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:42.291 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.550 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:42.551 
14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.551 14:35:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.925 [2024-12-16 14:35:35.705042] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:43.925 [2024-12-16 14:35:35.705065] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:43.925 [2024-12-16 14:35:35.705081] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:43.925 [2024-12-16 14:35:35.711075] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:19:43.925 [2024-12-16 14:35:35.769376] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:19:43.925 [2024-12-16 14:35:35.770194] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xcdd390:1 started. 00:19:43.925 [2024-12-16 14:35:35.771818] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:43.925 [2024-12-16 14:35:35.771846] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:43.925 [2024-12-16 14:35:35.773895] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xcdd390 was disconnected and freed. delete nvme_qpair. 
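
The `NOT rpc_cmd ... bdev_nvme_start_discovery ...` step above deliberately re-issues the discovery request that is already running under the bdev prefix nvme and expects the JSON-RPC "File exists" (-17) error dumped just below. A minimal negative-test sketch of the same idea, calling scripts/rpc.py directly (arguments, socket and the expected failure come from the trace; matching on the error text and the /tmp/dup_err scratch file are just one illustrative way to assert it):

#!/usr/bin/env bash
# Sketch: a second bdev_nvme_start_discovery with the same name must fail
# with "File exists" while the first discovery service keeps running.
HOST_SOCK=/tmp/host.sock
ARGS=(-b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w)

./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_start_discovery "${ARGS[@]}"   # first start succeeds

if ./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_start_discovery "${ARGS[@]}" 2>/tmp/dup_err; then
    echo "duplicate start_discovery unexpectedly succeeded" >&2
    exit 1
fi
grep -q 'File exists' /tmp/dup_err && echo "duplicate correctly rejected"
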
00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.925 request: 00:19:43.925 { 00:19:43.925 "name": "nvme", 00:19:43.925 "trtype": "tcp", 00:19:43.925 "traddr": "10.0.0.3", 00:19:43.925 "adrfam": "ipv4", 00:19:43.925 "trsvcid": "8009", 00:19:43.925 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:43.925 "wait_for_attach": true, 00:19:43.925 "method": "bdev_nvme_start_discovery", 00:19:43.925 "req_id": 1 00:19:43.925 } 00:19:43.925 Got JSON-RPC error response 00:19:43.925 response: 00:19:43.925 { 00:19:43.925 "code": -17, 00:19:43.925 "message": "File exists" 00:19:43.925 } 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.925 request: 00:19:43.925 { 00:19:43.925 "name": "nvme_second", 00:19:43.925 "trtype": "tcp", 00:19:43.925 "traddr": "10.0.0.3", 00:19:43.925 "adrfam": "ipv4", 00:19:43.925 "trsvcid": "8009", 00:19:43.925 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:43.925 "wait_for_attach": true, 00:19:43.925 "method": "bdev_nvme_start_discovery", 00:19:43.925 "req_id": 1 00:19:43.925 } 00:19:43.925 Got JSON-RPC error response 00:19:43.925 response: 00:19:43.925 { 00:19:43.925 "code": -17, 00:19:43.925 "message": "File exists" 00:19:43.925 } 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
get_discovery_ctrlrs 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.925 14:35:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:43.925 14:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.925 14:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:43.925 14:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:43.925 14:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:19:43.925 14:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:43.925 14:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:43.925 14:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:43.925 14:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:43.925 14:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:43.925 14:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:43.925 14:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.925 14:35:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.861 [2024-12-16 14:35:37.048269] uring.c: 664:uring_sock_create: *ERROR*: connect() 
failed, errno = 111 00:19:44.861 [2024-12-16 14:35:37.048331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf5be0 with addr=10.0.0.3, port=8010 00:19:44.861 [2024-12-16 14:35:37.048348] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:44.861 [2024-12-16 14:35:37.048356] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:44.861 [2024-12-16 14:35:37.048363] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:46.235 [2024-12-16 14:35:38.048251] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:46.235 [2024-12-16 14:35:38.048308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf5be0 with addr=10.0.0.3, port=8010 00:19:46.235 [2024-12-16 14:35:38.048324] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:46.235 [2024-12-16 14:35:38.048332] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:46.235 [2024-12-16 14:35:38.048339] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:47.171 [2024-12-16 14:35:39.048177] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:19:47.171 request: 00:19:47.171 { 00:19:47.171 "name": "nvme_second", 00:19:47.171 "trtype": "tcp", 00:19:47.171 "traddr": "10.0.0.3", 00:19:47.171 "adrfam": "ipv4", 00:19:47.171 "trsvcid": "8010", 00:19:47.171 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:47.171 "wait_for_attach": false, 00:19:47.171 "attach_timeout_ms": 3000, 00:19:47.171 "method": "bdev_nvme_start_discovery", 00:19:47.171 "req_id": 1 00:19:47.171 } 00:19:47.171 Got JSON-RPC error response 00:19:47.171 response: 00:19:47.171 { 00:19:47.171 "code": -110, 00:19:47.171 "message": "Connection timed out" 00:19:47.171 } 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:47.171 14:35:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 92291 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:47.171 rmmod nvme_tcp 00:19:47.171 rmmod nvme_fabrics 00:19:47.171 rmmod nvme_keyring 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 92272 ']' 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 92272 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 92272 ']' 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 92272 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92272 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:47.171 killing process with pid 92272 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92272' 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 92272 00:19:47.171 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 92272 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.430 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:19:47.690 00:19:47.690 real 0m8.930s 00:19:47.690 user 0m17.020s 00:19:47.690 sys 0m1.890s 00:19:47.690 ************************************ 00:19:47.690 END TEST nvmf_host_discovery 00:19:47.690 ************************************ 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.690 ************************************ 00:19:47.690 START TEST nvmf_host_multipath_status 00:19:47.690 ************************************ 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:47.690 * Looking for test storage... 00:19:47.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:19:47.690 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.691 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:19:47.691 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.691 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:19:47.691 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:19:47.691 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.691 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:19:47.691 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:47.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.950 --rc genhtml_branch_coverage=1 00:19:47.950 --rc genhtml_function_coverage=1 00:19:47.950 --rc genhtml_legend=1 00:19:47.950 --rc geninfo_all_blocks=1 00:19:47.950 --rc geninfo_unexecuted_blocks=1 00:19:47.950 00:19:47.950 ' 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:47.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.950 --rc genhtml_branch_coverage=1 00:19:47.950 --rc genhtml_function_coverage=1 00:19:47.950 --rc genhtml_legend=1 00:19:47.950 --rc geninfo_all_blocks=1 00:19:47.950 --rc geninfo_unexecuted_blocks=1 00:19:47.950 00:19:47.950 ' 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:47.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.950 --rc genhtml_branch_coverage=1 00:19:47.950 --rc genhtml_function_coverage=1 00:19:47.950 --rc genhtml_legend=1 00:19:47.950 --rc geninfo_all_blocks=1 00:19:47.950 --rc geninfo_unexecuted_blocks=1 00:19:47.950 00:19:47.950 ' 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:47.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.950 --rc genhtml_branch_coverage=1 00:19:47.950 --rc genhtml_function_coverage=1 00:19:47.950 --rc genhtml_legend=1 00:19:47.950 --rc geninfo_all_blocks=1 00:19:47.950 --rc geninfo_unexecuted_blocks=1 00:19:47.950 00:19:47.950 ' 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:47.950 14:35:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:47.950 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:47.950 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:47.951 Cannot find device "nvmf_init_br" 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:47.951 Cannot find device "nvmf_init_br2" 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:47.951 Cannot find device "nvmf_tgt_br" 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.951 Cannot find device "nvmf_tgt_br2" 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:47.951 Cannot find device "nvmf_init_br" 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:19:47.951 14:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:47.951 Cannot find device "nvmf_init_br2" 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:47.951 Cannot find device "nvmf_tgt_br" 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:47.951 Cannot find device "nvmf_tgt_br2" 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:47.951 Cannot find device "nvmf_br" 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:19:47.951 Cannot find device "nvmf_init_if" 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:47.951 Cannot find device "nvmf_init_if2" 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.951 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.951 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:47.951 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:48.210 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:48.210 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:19:48.210 00:19:48.210 --- 10.0.0.3 ping statistics --- 00:19:48.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.210 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:48.210 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:48.210 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:19:48.210 00:19:48.210 --- 10.0.0.4 ping statistics --- 00:19:48.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.210 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:48.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:48.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:48.210 00:19:48.210 --- 10.0.0.1 ping statistics --- 00:19:48.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.210 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:48.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:19:48.210 00:19:48.210 --- 10.0.0.2 ping statistics --- 00:19:48.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.210 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:48.210 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:48.211 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:48.211 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:48.211 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=92785 00:19:48.211 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:48.211 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 92785 00:19:48.211 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 92785 ']' 00:19:48.211 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.211 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.211 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
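The trace above amounts to launching the NVMe-oF target inside the nvmf_tgt_ns_spdk namespace built earlier and then polling its RPC socket until it answers. A minimal sketch of that phase, assuming the stock scripts/rpc.py workflow and the paths shown in this log (waitforlisten in autotest_common.sh does more bookkeeping than this):

  # Sketch only: start the target in the test namespace and wait for its RPC socket.
  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # Poll an always-available RPC until the socket comes up; the real helper also enforces a timeout.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done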
00:19:48.211 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.211 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:48.211 [2024-12-16 14:35:40.396121] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:48.211 [2024-12-16 14:35:40.396215] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.469 [2024-12-16 14:35:40.546412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:48.469 [2024-12-16 14:35:40.570464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.469 [2024-12-16 14:35:40.570529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.469 [2024-12-16 14:35:40.570542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.469 [2024-12-16 14:35:40.570552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.469 [2024-12-16 14:35:40.570561] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.469 [2024-12-16 14:35:40.571475] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.469 [2024-12-16 14:35:40.571483] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.469 [2024-12-16 14:35:40.606261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:48.469 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.469 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:19:48.469 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:48.469 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.469 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:48.728 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.728 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=92785 00:19:48.728 14:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:48.986 [2024-12-16 14:35:40.997830] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.986 14:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:49.244 Malloc0 00:19:49.245 14:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:49.503 14:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:49.762 14:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:50.020 [2024-12-16 14:35:42.054817] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:50.020 14:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:50.279 [2024-12-16 14:35:42.282882] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:50.279 14:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=92829 00:19:50.279 14:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:50.279 14:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.279 14:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 92829 /var/tmp/bdevperf.sock 00:19:50.279 14:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 92829 ']' 00:19:50.279 14:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.279 14:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.279 14:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
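Condensed from the RPC calls traced above and below, the test configuration is roughly: a two-listener TCP subsystem backed by Malloc0 on the target, and a bdevperf initiator that attaches both listeners as a single multipath bdev and inspects them through bdev_nvme_get_io_paths. A sketch, assuming rpc.py stands for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used in this run:

  # Target side (default /var/tmp/spdk.sock), matching the calls above:
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

  # Initiator side (bdevperf RPC socket), matching the calls below: two paths, one Nvme0 bdev.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

  # Each check_status/port_status assertion then reduces to a jq filter over the path list, e.g.:
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'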
00:19:50.279 14:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.279 14:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:50.537 14:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.537 14:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:19:50.537 14:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:50.796 14:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:51.054 Nvme0n1 00:19:51.054 14:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:51.621 Nvme0n1 00:19:51.621 14:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:51.621 14:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:53.530 14:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:53.530 14:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:53.789 14:35:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:54.047 14:35:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:54.983 14:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:54.983 14:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:54.983 14:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.983 14:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:55.242 14:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.242 14:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:55.242 14:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.242 14:35:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:55.500 14:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:55.500 14:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:55.500 14:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:55.500 14:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.759 14:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.759 14:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:55.759 14:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.759 14:35:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:56.017 14:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.018 14:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:56.018 14:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:56.018 14:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:56.276 14:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.276 14:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:56.276 14:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:56.276 14:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:56.535 14:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.535 14:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:56.535 14:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:56.794 14:35:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:57.053 14:35:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:58.430 14:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:58.430 14:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:58.430 14:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.430 14:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:58.430 14:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:58.430 14:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:58.430 14:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.430 14:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:58.688 14:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.688 14:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:58.688 14:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.688 14:35:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:58.947 14:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.947 14:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:58.947 14:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.947 14:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:59.206 14:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.206 14:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:59.206 14:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:59.206 14:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.465 14:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.465 14:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:59.465 14:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:59.465 14:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.723 14:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.723 14:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:59.723 14:35:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:00.290 14:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:00.290 14:35:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:20:01.722 14:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:20:01.722 14:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:01.722 14:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.722 14:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:01.722 14:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.722 14:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:01.722 14:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.722 14:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:01.981 14:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:01.981 14:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:01.981 14:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:01.981 14:35:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.239 14:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.239 14:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:20:02.239 14:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:02.239 14:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.497 14:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.497 14:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:02.497 14:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.497 14:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:02.754 14:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.754 14:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:02.754 14:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.754 14:35:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:03.012 14:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:03.012 14:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:20:03.013 14:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:03.271 14:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:03.528 14:35:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:20:04.463 14:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:20:04.463 14:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:04.463 14:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.463 14:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:04.722 14:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.722 14:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:04.722 14:35:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.722 14:35:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:04.981 14:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:04.981 14:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:04.981 14:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.981 14:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:05.239 14:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.239 14:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:05.239 14:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:05.239 14:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.498 14:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.498 14:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:05.498 14:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:05.498 14:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.756 14:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.756 14:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:05.756 14:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.756 14:35:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:06.015 14:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:06.015 14:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:20:06.015 14:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:06.274 14:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:06.532 14:35:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:07.467 14:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:07.467 14:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:07.467 14:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.467 14:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:07.725 14:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:07.725 14:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:07.725 14:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.725 14:35:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:07.983 14:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:07.983 14:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:07.983 14:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.983 14:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:08.241 14:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:08.241 14:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:08.241 14:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:08.241 14:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:08.808 14:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:08.808 14:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:08.808 14:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:08.808 14:36:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:20:09.066 14:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:09.066 14:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:09.066 14:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:09.066 14:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.066 14:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:09.066 14:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:20:09.066 14:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:09.323 14:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:09.580 14:36:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:10.514 14:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:10.514 14:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:10.514 14:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:10.514 14:36:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.081 14:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:11.081 14:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:11.081 14:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:11.081 14:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.340 14:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:11.340 14:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:11.340 14:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.340 14:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:20:11.598 14:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:11.598 14:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:11.598 14:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:11.598 14:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.856 14:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:11.856 14:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:11.857 14:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.857 14:36:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:12.115 14:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:12.115 14:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:12.115 14:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.115 14:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:12.373 14:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:12.373 14:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:12.632 14:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:12.632 14:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:12.890 14:36:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:13.148 14:36:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:20:14.084 14:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:14.084 14:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:14.084 14:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
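A minimal sketch, assuming only the rpc.py calls visible in this trace, of what the set_ANA_state helper appears to do: it sets the ANA state of the two target listeners (ports 4420 and 4421) on subsystem nqn.2016-06.io.spdk:cnode1, after which the test sleeps for a second so the host side can observe the change. The actual helper in multipath_status.sh may differ in detail.

set_ANA_state() {
    local state_4420="$1" state_4421="$2"
    # Both calls go to the nvmf target's default RPC socket (not the bdevperf one).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$state_4420"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$state_4421"
}

# Example invocation, matching the multipath_status.sh@119 step above:
# set_ANA_state optimized optimized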
00:20:14.084 14:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:14.342 14:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:14.342 14:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:14.342 14:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.342 14:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:14.601 14:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:14.601 14:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:14.601 14:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.601 14:36:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:14.860 14:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:14.860 14:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:14.860 14:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.860 14:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:15.118 14:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:15.118 14:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:15.118 14:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:15.118 14:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.686 14:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:15.686 14:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:15.686 14:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.686 14:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:15.686 14:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:15.686 
14:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:15.686 14:36:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:15.944 14:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:16.203 14:36:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:17.138 14:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:20:17.138 14:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:17.138 14:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.138 14:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:17.705 14:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:17.705 14:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:17.705 14:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:17.705 14:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.705 14:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.705 14:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:17.705 14:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.705 14:36:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:17.963 14:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.963 14:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:17.963 14:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:17.963 14:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:18.222 14:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:18.222 14:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:18.222 14:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:18.222 14:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:18.480 14:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:18.480 14:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:18.480 14:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:18.480 14:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:18.738 14:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:18.738 14:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:20:18.738 14:36:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:19.008 14:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:19.268 14:36:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:20:20.203 14:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:20:20.203 14:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:20.203 14:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.203 14:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:20.793 14:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.793 14:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:20.793 14:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.793 14:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:20.793 14:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.793 14:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:20:20.793 14:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.793 14:36:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:21.065 14:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:21.065 14:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:21.065 14:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:21.065 14:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:21.324 14:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:21.324 14:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:21.324 14:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:21.324 14:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:21.582 14:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:21.582 14:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:21.582 14:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:21.582 14:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:21.840 14:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:21.840 14:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:20:21.840 14:36:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:22.099 14:36:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:22.357 14:36:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:23.733 14:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:23.733 14:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:23.733 14:36:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.733 14:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:23.733 14:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:23.733 14:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:23.733 14:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.733 14:36:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:23.991 14:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:23.991 14:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:23.991 14:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.991 14:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:24.250 14:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:24.250 14:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:24.250 14:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:24.250 14:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:24.508 14:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:24.508 14:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:24.508 14:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:24.508 14:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:24.767 14:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:24.767 14:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:24.767 14:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:24.767 14:36:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:20:25.025 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:25.025 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 92829 00:20:25.025 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 92829 ']' 00:20:25.025 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 92829 00:20:25.025 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:20:25.025 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.025 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92829 00:20:25.025 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:25.025 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:25.025 killing process with pid 92829 00:20:25.025 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92829' 00:20:25.025 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 92829 00:20:25.025 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 92829 00:20:25.025 { 00:20:25.025 "results": [ 00:20:25.025 { 00:20:25.025 "job": "Nvme0n1", 00:20:25.025 "core_mask": "0x4", 00:20:25.025 "workload": "verify", 00:20:25.025 "status": "terminated", 00:20:25.025 "verify_range": { 00:20:25.025 "start": 0, 00:20:25.025 "length": 16384 00:20:25.025 }, 00:20:25.025 "queue_depth": 128, 00:20:25.025 "io_size": 4096, 00:20:25.025 "runtime": 33.480288, 00:20:25.025 "iops": 9418.61670962926, 00:20:25.025 "mibps": 36.791471521989294, 00:20:25.025 "io_failed": 0, 00:20:25.025 "io_timeout": 0, 00:20:25.025 "avg_latency_us": 13561.826481368622, 00:20:25.025 "min_latency_us": 990.4872727272727, 00:20:25.026 "max_latency_us": 4026531.84 00:20:25.026 } 00:20:25.026 ], 00:20:25.026 "core_count": 1 00:20:25.026 } 00:20:25.292 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 92829 00:20:25.292 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:25.292 [2024-12-16 14:35:42.358283] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:20:25.292 [2024-12-16 14:35:42.358397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92829 ] 00:20:25.292 [2024-12-16 14:35:42.510161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.292 [2024-12-16 14:35:42.534380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.292 [2024-12-16 14:35:42.568089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:25.292 Running I/O for 90 seconds... 
00:20:25.292 8084.00 IOPS, 31.58 MiB/s [2024-12-16T14:36:17.492Z] 8074.50 IOPS, 31.54 MiB/s [2024-12-16T14:36:17.492Z] 8071.00 IOPS, 31.53 MiB/s [2024-12-16T14:36:17.492Z] 8069.00 IOPS, 31.52 MiB/s [2024-12-16T14:36:17.492Z] 8020.00 IOPS, 31.33 MiB/s [2024-12-16T14:36:17.492Z] 8031.83 IOPS, 31.37 MiB/s [2024-12-16T14:36:17.492Z] 8067.71 IOPS, 31.51 MiB/s [2024-12-16T14:36:17.492Z] 8355.25 IOPS, 32.64 MiB/s [2024-12-16T14:36:17.492Z] 8605.56 IOPS, 33.62 MiB/s [2024-12-16T14:36:17.492Z] 8830.60 IOPS, 34.49 MiB/s [2024-12-16T14:36:17.492Z] 8984.91 IOPS, 35.10 MiB/s [2024-12-16T14:36:17.492Z] 9131.50 IOPS, 35.67 MiB/s [2024-12-16T14:36:17.492Z] 9270.31 IOPS, 36.21 MiB/s [2024-12-16T14:36:17.492Z] 9361.29 IOPS, 36.57 MiB/s [2024-12-16T14:36:17.492Z] [2024-12-16 14:35:58.355283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.292 [2024-12-16 14:35:58.355343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.355411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.292 [2024-12-16 14:35:58.355431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.355464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.292 [2024-12-16 14:35:58.355481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.355501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.292 [2024-12-16 14:35:58.355515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.355534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.292 [2024-12-16 14:35:58.355548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.355567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.292 [2024-12-16 14:35:58.355580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.355599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.292 [2024-12-16 14:35:58.355613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.355632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.292 [2024-12-16 14:35:58.355646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.355664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.292 [2024-12-16 14:35:58.355678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.355720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.292 [2024-12-16 14:35:58.355735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.355754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.292 [2024-12-16 14:35:58.355768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.355787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.292 [2024-12-16 14:35:58.355801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.355820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.292 [2024-12-16 14:35:58.355834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.355853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.292 [2024-12-16 14:35:58.355867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.355886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.292 [2024-12-16 14:35:58.355900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.355918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.292 [2024-12-16 14:35:58.355931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.355951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.292 [2024-12-16 14:35:58.355965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.355984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.292 [2024-12-16 14:35:58.356013] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.356032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.292 [2024-12-16 14:35:58.356046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.356065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.292 [2024-12-16 14:35:58.356079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.356099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.292 [2024-12-16 14:35:58.356113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.356143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.292 [2024-12-16 14:35:58.356159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:25.292 [2024-12-16 14:35:58.356180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.356195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.356231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.356266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.356301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.356335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.356370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.356406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.356441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.356490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.356525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.293 [2024-12-16 14:35:58.356567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.293 [2024-12-16 14:35:58.356611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.293 [2024-12-16 14:35:58.356647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.293 [2024-12-16 14:35:58.356682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.293 [2024-12-16 14:35:58.356716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.293 [2024-12-16 14:35:58.356751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.293 [2024-12-16 14:35:58.356785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.293 [2024-12-16 14:35:58.356820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.356854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.356903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.356937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.356970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.356990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.357005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.357025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.357047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.357067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.357082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.357101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.357115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.357135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.357149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.357168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.357183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.357202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.357216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.357236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.357250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.357269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.357284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.357304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.357318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.357337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.357351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.357370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.293 [2024-12-16 14:35:58.357385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.357404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.293 [2024-12-16 14:35:58.357419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 
dnr:0 00:20:25.293 [2024-12-16 14:35:58.357453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.293 [2024-12-16 14:35:58.357481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.357511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.293 [2024-12-16 14:35:58.357528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.357548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.293 [2024-12-16 14:35:58.357564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.357585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.293 [2024-12-16 14:35:58.357601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.357622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.293 [2024-12-16 14:35:58.357637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:25.293 [2024-12-16 14:35:58.357657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.357672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.357693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.357707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.357728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.357743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.357763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.357779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.357799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.357814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.357849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.357864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.357899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.357915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.357935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.357949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.357976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.357991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.358025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.294 [2024-12-16 14:35:58.358058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.294 [2024-12-16 14:35:58.358092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.294 [2024-12-16 14:35:58.358126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.294 [2024-12-16 14:35:58.358160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.294 [2024-12-16 14:35:58.358194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.294 [2024-12-16 14:35:58.358235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.294 [2024-12-16 14:35:58.358269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.294 [2024-12-16 14:35:58.358303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.358341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.358375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.358417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.358466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.358518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.358553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:25.294 [2024-12-16 14:35:58.358588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.358622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.358657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.358692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.358727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.358761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.358820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.358880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.358925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.294 [2024-12-16 14:35:58.358963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.358985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.294 [2024-12-16 14:35:58.359001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.359022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.294 [2024-12-16 14:35:58.359037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.359058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.294 [2024-12-16 14:35:58.359088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.359108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.294 [2024-12-16 14:35:58.359138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.359158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.294 [2024-12-16 14:35:58.359173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.359207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.294 [2024-12-16 14:35:58.359221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:25.294 [2024-12-16 14:35:58.359241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.295 [2024-12-16 14:35:58.359255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.359275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.295 [2024-12-16 14:35:58.359289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.359308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.295 [2024-12-16 14:35:58.359322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.359342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.295 [2024-12-16 14:35:58.359357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.359377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.295 [2024-12-16 14:35:58.359401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.359423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.295 [2024-12-16 14:35:58.359454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.359474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.295 [2024-12-16 14:35:58.359489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.359528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.295 [2024-12-16 14:35:58.359556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.359579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.295 [2024-12-16 14:35:58.359595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.360286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.295 [2024-12-16 14:35:58.360313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.360345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:35:58.360362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.360389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:35:58.360404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.360430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:35:58.360473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.360504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:35:58.360520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
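(A brief aside for anyone skimming this log: every completion above is printed with status "(03/02)". That pair is the NVMe status code type / status code; type 0x3 is the path-related group and code 0x02 within it is Asymmetric Access Inaccessible, which is consistent with the "ASYMMETRIC ACCESS INACCESSIBLE" text SPDK prints here while the ANA state of the path flips during the test. The snippet below is only an illustrative helper, not part of SPDK or of this test suite, sketching how the pair can be pulled out of one of these lines.)

import re

# Minimal sketch (hypothetical helper): extract the "(sct/sc)" pair from one of
# the spdk_nvme_print_completion lines above. Example input copied from the log.
line = ("nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: "
        "ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 "
        "sqhd:0024 p:0 m:0 dnr:0")

m = re.search(r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)", line)
sct = int(m.group("sct"), 16)  # 0x3 = path-related status
sc = int(m.group("sc"), 16)    # 0x02 = asymmetric access inaccessible
print(f"sct=0x{sct:x} sc=0x{sc:02x}")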
00:20:25.295 [2024-12-16 14:35:58.360547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:35:58.360563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.360591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:35:58.360606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.360633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:35:58.360648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.360707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:35:58.360728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.360755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:35:58.360771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.360797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:35:58.360813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.360839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:35:58.360856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.360897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:35:58.360913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.360939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:35:58.360954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.360983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:35:58.360999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.361026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:35:58.361041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:35:58.361067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:35:58.361082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:25.295 9228.93 IOPS, 36.05 MiB/s [2024-12-16T14:36:17.495Z] 8652.12 IOPS, 33.80 MiB/s [2024-12-16T14:36:17.495Z] 8143.18 IOPS, 31.81 MiB/s [2024-12-16T14:36:17.495Z] 7690.78 IOPS, 30.04 MiB/s [2024-12-16T14:36:17.495Z] 7446.37 IOPS, 29.09 MiB/s [2024-12-16T14:36:17.495Z] 7584.85 IOPS, 29.63 MiB/s [2024-12-16T14:36:17.495Z] 7723.86 IOPS, 30.17 MiB/s [2024-12-16T14:36:17.495Z] 7965.50 IOPS, 31.12 MiB/s [2024-12-16T14:36:17.495Z] 8238.78 IOPS, 32.18 MiB/s [2024-12-16T14:36:17.495Z] 8477.50 IOPS, 33.12 MiB/s [2024-12-16T14:36:17.495Z] 8622.24 IOPS, 33.68 MiB/s [2024-12-16T14:36:17.495Z] 8703.23 IOPS, 34.00 MiB/s [2024-12-16T14:36:17.495Z] 8768.44 IOPS, 34.25 MiB/s [2024-12-16T14:36:17.495Z] 8872.32 IOPS, 34.66 MiB/s [2024-12-16T14:36:17.495Z] 9048.59 IOPS, 35.35 MiB/s [2024-12-16T14:36:17.495Z] 9199.80 IOPS, 35.94 MiB/s [2024-12-16T14:36:17.495Z] [2024-12-16 14:36:14.468860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:36:14.468921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:36:14.468971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:36:14.469011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:36:14.469035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:36:14.469049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:36:14.469069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:36:14.469083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:36:14.469102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:36:14.469116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:36:14.469136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.295 [2024-12-16 14:36:14.469149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:36:14.469169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:36:14.469183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:36:14.469202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:36:14.469215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:36:14.469235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:36:14.469248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:36:14.469268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:36:14.469282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:36:14.469301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.295 [2024-12-16 14:36:14.469315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:25.295 [2024-12-16 14:36:14.469334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.469348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.469367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.469381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.469401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.469414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.469471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.469489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.469510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.296 [2024-12-16 14:36:14.469526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.469547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.469561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.469583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:70184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.469599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.469620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.469635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.469656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.469671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.469691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.296 [2024-12-16 14:36:14.469706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.469727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.469742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.469762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.469777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.469798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.469812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.469833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.296 [2024-12-16 14:36:14.469862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
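(Another aside: the "9228.93 IOPS, 36.05 MiB/s ... 9199.80 IOPS, 35.94 MiB/s" samples interleaved a few lines above follow directly from the I/O size. Assuming 512-byte logical blocks, the len:8 commands in this log are 4 KiB each, so MiB/s = IOPS * 4096 / 2^20. A minimal check against the first few samples, purely illustrative:)

# Minimal sketch: verify the bandwidth figures reported in the log above,
# assuming 4 KiB per I/O (len:8 x 512-byte blocks).
samples = [9228.93, 8652.12, 8143.18, 7690.78, 7446.37]  # IOPS values from the log

for iops in samples:
    mib_s = iops * 4096 / 2**20
    print(f"{iops:8.2f} IOPS -> {mib_s:5.2f} MiB/s")
# First sample prints 36.05 MiB/s, matching the logged "9228.93 IOPS, 36.05 MiB/s".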
00:20:25.296 [2024-12-16 14:36:14.469882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.296 [2024-12-16 14:36:14.469897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.469926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.296 [2024-12-16 14:36:14.469941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.469961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.296 [2024-12-16 14:36:14.469976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.469996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.296 [2024-12-16 14:36:14.470010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.470031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.296 [2024-12-16 14:36:14.470045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.470065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.296 [2024-12-16 14:36:14.470080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.470100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.296 [2024-12-16 14:36:14.470114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.470134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.296 [2024-12-16 14:36:14.470148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.470168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.296 [2024-12-16 14:36:14.470184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.470204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.470218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.470238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.470252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.470272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.470287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.470307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.470321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.470341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.470362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.470383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.296 [2024-12-16 14:36:14.470398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.470418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.296 [2024-12-16 14:36:14.470433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.470462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.296 [2024-12-16 14:36:14.470495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.470516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.296 [2024-12-16 14:36:14.470530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.470551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.470565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.296 [2024-12-16 14:36:14.470586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.296 [2024-12-16 14:36:14.470602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.470624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.470639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.470659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.470674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.470695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.297 [2024-12-16 14:36:14.470709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.470730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.470745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.470765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.470780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.470800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.470849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.470891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.297 [2024-12-16 14:36:14.470912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.470935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.297 [2024-12-16 14:36:14.470951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.470972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.297 [2024-12-16 14:36:14.470987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:25.297 [2024-12-16 14:36:14.471023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.471060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.471110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.471146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.297 [2024-12-16 14:36:14.471195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.471230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.471267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.297 [2024-12-16 14:36:14.471302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.297 [2024-12-16 14:36:14.471336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.297 [2024-12-16 14:36:14.471381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 
lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.297 [2024-12-16 14:36:14.471415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.297 [2024-12-16 14:36:14.471449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.471500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.471537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.471571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.471605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.297 [2024-12-16 14:36:14.471640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.471674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.471708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.471742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471761] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.471776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.471804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.471820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.473354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.297 [2024-12-16 14:36:14.473383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.473409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.297 [2024-12-16 14:36:14.473426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.473460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.473478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.473498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.473513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.473533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.297 [2024-12-16 14:36:14.473549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.473569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.297 [2024-12-16 14:36:14.473583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.473603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.297 [2024-12-16 14:36:14.473618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:25.297 [2024-12-16 14:36:14.473638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.297 [2024-12-16 14:36:14.473652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:20:25.297 [2024-12-16 14:36:14.473672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.297 [2024-12-16 14:36:14.473687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.473706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.473721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.473757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.473777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.473809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.473825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.473845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.473860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.473880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.473895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.473914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.473929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.473949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.473964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.473984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.473999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.474039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.474074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.474109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.474143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.474177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.474212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.474268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.474306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.474345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.474380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.474415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.474480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.474516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.474551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.474588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.474623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.474662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.474698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.474741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.474779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:25.298 [2024-12-16 14:36:14.474843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.474882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.474918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.474941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.474957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.476169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.476195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.476220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.476236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.476257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.476271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.476291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.476306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.476326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.476340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.476360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.298 [2024-12-16 14:36:14.476375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.476395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 
nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.476409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:25.298 [2024-12-16 14:36:14.476455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.298 [2024-12-16 14:36:14.476472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.476493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.476508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.476528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.476543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.476562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.476577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.476597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.476611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.476631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.476645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.476665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.476679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.476699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.476713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.476733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.476749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.476768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.476783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.476802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.476816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.476836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.476851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.476895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.476916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.476937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.476952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.476972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.476993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.477014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.477029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.477049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.477063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.477083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.477098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.477118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.477132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
00:20:25.299 [2024-12-16 14:36:14.477152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.477166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.477186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.477201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.477221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.477235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.477255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.477270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.477289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.477304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.477323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.477346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.477368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.477382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.477406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.477421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.477453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.477468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.477489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.477503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.477524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.477538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.477557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.477574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.477595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.477610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.479251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.479280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.479306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.479323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.479343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.479358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.479378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.479392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.479412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.479438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.479461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.479491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.479514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.479529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.479548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.299 [2024-12-16 14:36:14.479563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:25.299 [2024-12-16 14:36:14.479583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.299 [2024-12-16 14:36:14.479597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.479617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.479631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.479651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.479665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.479685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.479699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.479719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.479733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.479753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.479768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.479787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.479802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.479823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.300 [2024-12-16 14:36:14.479837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.479858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:25.300 [2024-12-16 14:36:14.479872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.479905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.479920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.479956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.300 [2024-12-16 14:36:14.479976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.479997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.480012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.480046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.300 [2024-12-16 14:36:14.480081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.300 [2024-12-16 14:36:14.480115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.300 [2024-12-16 14:36:14.480149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.300 [2024-12-16 14:36:14.480184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.480218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.300 [2024-12-16 14:36:14.480252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.300 [2024-12-16 14:36:14.480285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.300 [2024-12-16 14:36:14.480322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.480368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.480402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.480450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.480487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.300 [2024-12-16 14:36:14.480521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.300 [2024-12-16 14:36:14.480556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.480590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.480624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:70728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.480658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.480692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.300 [2024-12-16 14:36:14.480726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.300 [2024-12-16 14:36:14.480760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.300 [2024-12-16 14:36:14.480802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.300 [2024-12-16 14:36:14.480839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.300 [2024-12-16 14:36:14.480873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.480893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.300 [2024-12-16 14:36:14.480909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:25.300 [2024-12-16 14:36:14.482402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.482457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:20:25.301 [2024-12-16 14:36:14.482486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.482503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.482523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.482538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.482559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.482574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.482595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.482609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.482630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.482645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.482665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.482679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.482700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.482715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.482735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.482761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.482785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.482800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.482863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.482879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.482901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.482917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.482939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.482955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.482984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.483000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.483022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.483037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.483076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.483112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.483135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.483151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.483172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.483202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.483222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.483237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.483257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.483272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.483303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.483318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.483347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.483363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.483383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.483397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.483417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.483431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.483451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.483465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.483485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.483538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.483564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.483580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.483613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.483628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.483653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.483670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.483693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.483709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.484564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:25.301 [2024-12-16 14:36:14.484591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.484618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.484634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.484654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.484669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.484701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.484718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.484738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.484753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.484772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.484787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.484807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.301 [2024-12-16 14:36:14.484821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.484842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.484856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.484876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.301 [2024-12-16 14:36:14.484890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:25.301 [2024-12-16 14:36:14.484910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.484925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.484945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.484959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.484979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.484993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.485013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.485028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.486238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.486281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.486326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.486363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.486398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.486449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.486487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.486522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.486556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.486591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.486626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.486660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.486694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.486728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.486770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.486832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.486901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
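The status repeated in the completions above, ASYMMETRIC ACCESS INACCESSIBLE (03/02), is NVMe Status Code Type 0x3 (Path Related Status) with Status Code 0x02: the namespace is temporarily unreachable through this path while the test flips its ANA state, so the verify job's queued reads and writes complete with this error and come back again with new command identifiers (the same LBAs recur above with different cid values). A minimal way to watch the path state from the initiator side is to query the SPDK bdev_nvme layer over its RPC socket; the socket path below is a placeholder, not taken from this log:

  # Hypothetical check, assuming an SPDK initiator app with its RPC socket at $RPC_SOCK.
  # bdev_nvme_get_controllers lists each attached controller/path, so the ANA flip shows
  # up as one path failing while the other keeps serving I/O.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC_SOCK" bdev_nvme_get_controllers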
00:20:25.302 [2024-12-16 14:36:14.486923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.486939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.486976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.486998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.487013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.487035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.487051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.487073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.487089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.487111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.487126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.487162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.487177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.487212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.487227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.487261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.487276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.487296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.487311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.487339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.487354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.487374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.487389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.488346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.488375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.488414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.302 [2024-12-16 14:36:14.488433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.488455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.488488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.488511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.488527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.488548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.488563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.488598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.488613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:25.302 [2024-12-16 14:36:14.488633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.302 [2024-12-16 14:36:14.488647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.488667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.303 [2024-12-16 14:36:14.488682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.488702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.303 [2024-12-16 14:36:14.488717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.488737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.303 [2024-12-16 14:36:14.488751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.488783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.303 [2024-12-16 14:36:14.488799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.488820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.303 [2024-12-16 14:36:14.488834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.488854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.303 [2024-12-16 14:36:14.488870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.488890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.303 [2024-12-16 14:36:14.488905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.488925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.303 [2024-12-16 14:36:14.488940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.488960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.303 [2024-12-16 14:36:14.488974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.488998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.303 [2024-12-16 14:36:14.489014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.489034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
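The reads and writes being reported here are generated by the bdevperf verify job summarized further below (Core Mask 0x4, queue depth 128, 4096-byte I/O). A rough sketch of such an invocation using bdevperf's standard flags; the RPC socket path and run time are assumptions, and the actual command line lives in multipath_status.sh rather than in this log:

  # Assumed-shape bdevperf run matching the job parameters in the summary table:
  #   -m 0x4   core mask            -q 128     queue depth
  #   -o 4096  I/O size in bytes    -w verify  write-then-read-and-compare workload
  #   -z       wait for RPC before starting    -r  RPC socket (placeholder path)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -q 128 -o 4096 -w verify \
      -t 90 -z -r /tmp/bdevperf.sock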
00:20:25.303 [2024-12-16 14:36:14.489049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.489068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.303 [2024-12-16 14:36:14.489083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.489103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.303 [2024-12-16 14:36:14.489117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.489137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.303 [2024-12-16 14:36:14.489152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.489171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.303 [2024-12-16 14:36:14.489186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.489206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.303 [2024-12-16 14:36:14.489228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.489249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.303 [2024-12-16 14:36:14.489264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.489284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.303 [2024-12-16 14:36:14.489299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.489319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.303 [2024-12-16 14:36:14.489334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.489354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:25.303 [2024-12-16 14:36:14.489368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.489388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.303 [2024-12-16 14:36:14.489403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.489422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.303 [2024-12-16 14:36:14.489437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:25.303 [2024-12-16 14:36:14.489469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.303 [2024-12-16 14:36:14.489486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:25.303 9329.61 IOPS, 36.44 MiB/s [2024-12-16T14:36:17.503Z] 9374.31 IOPS, 36.62 MiB/s [2024-12-16T14:36:17.503Z] 9405.39 IOPS, 36.74 MiB/s [2024-12-16T14:36:17.503Z] Received shutdown signal, test time was about 33.481192 seconds 00:20:25.303 00:20:25.303 Latency(us) 00:20:25.303 [2024-12-16T14:36:17.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.303 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:25.303 Verification LBA range: start 0x0 length 0x4000 00:20:25.303 Nvme0n1 : 33.48 9418.62 36.79 0.00 0.00 13561.83 990.49 4026531.84 00:20:25.303 [2024-12-16T14:36:17.503Z] =================================================================================================================== 00:20:25.303 [2024-12-16T14:36:17.503Z] Total : 9418.62 36.79 0.00 0.00 13561.83 990.49 4026531.84 00:20:25.303 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:25.562 rmmod nvme_tcp 00:20:25.562 rmmod nvme_fabrics 00:20:25.562 rmmod nvme_keyring 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:20:25.562 14:36:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 92785 ']' 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 92785 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 92785 ']' 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 92785 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92785 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:25.562 killing process with pid 92785 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92785' 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 92785 00:20:25.562 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 92785 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 
down 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:25.821 14:36:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:25.821 14:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:25.821 14:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.821 14:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.821 14:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:20:26.081 00:20:26.081 real 0m38.343s 00:20:26.081 user 2m4.629s 00:20:26.081 sys 0m11.076s 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:26.081 ************************************ 00:20:26.081 END TEST nvmf_host_multipath_status 00:20:26.081 ************************************ 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.081 ************************************ 00:20:26.081 START TEST nvmf_discovery_remove_ifc 00:20:26.081 ************************************ 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:26.081 * Looking for test storage... 
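At this point the multipath_status run is complete: the subsystem is deleted over RPC, nvmftestfini unloads the kernel NVMe/TCP modules, killprocess stops the target (pid 92785), only the SPDK-tagged iptables rules are dropped, and the veth/bridge/namespace topology is torn down before the next test begins. Condensed into a plain shell sketch of that same teardown, stripped of the xtrace prefixes (the last line is an assumption about what remove_spdk_ns boils down to):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics   # also drops nvme_fabrics/nvme_keyring deps
  kill 92785 && wait 92785                                  # killprocess on the nvmf_tgt pid
  iptables-save | grep -v SPDK_NVMF | iptables-restore      # keep everything except SPDK-tagged rules
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if && ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                          # assumption: remove_spdk_ns deletes the namespace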
00:20:26.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:26.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.081 --rc genhtml_branch_coverage=1 00:20:26.081 --rc genhtml_function_coverage=1 00:20:26.081 --rc genhtml_legend=1 00:20:26.081 --rc geninfo_all_blocks=1 00:20:26.081 --rc geninfo_unexecuted_blocks=1 00:20:26.081 00:20:26.081 ' 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:26.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.081 --rc genhtml_branch_coverage=1 00:20:26.081 --rc genhtml_function_coverage=1 00:20:26.081 --rc genhtml_legend=1 00:20:26.081 --rc geninfo_all_blocks=1 00:20:26.081 --rc geninfo_unexecuted_blocks=1 00:20:26.081 00:20:26.081 ' 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:26.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.081 --rc genhtml_branch_coverage=1 00:20:26.081 --rc genhtml_function_coverage=1 00:20:26.081 --rc genhtml_legend=1 00:20:26.081 --rc geninfo_all_blocks=1 00:20:26.081 --rc geninfo_unexecuted_blocks=1 00:20:26.081 00:20:26.081 ' 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:26.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.081 --rc genhtml_branch_coverage=1 00:20:26.081 --rc genhtml_function_coverage=1 00:20:26.081 --rc genhtml_legend=1 00:20:26.081 --rc geninfo_all_blocks=1 00:20:26.081 --rc geninfo_unexecuted_blocks=1 00:20:26.081 00:20:26.081 ' 00:20:26.081 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:26.081 14:36:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:26.341 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:26.341 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:26.342 14:36:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:26.342 Cannot find device "nvmf_init_br" 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:26.342 Cannot find device "nvmf_init_br2" 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:26.342 Cannot find device "nvmf_tgt_br" 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:26.342 Cannot find device "nvmf_tgt_br2" 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:26.342 Cannot find device "nvmf_init_br" 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:26.342 Cannot find device "nvmf_init_br2" 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:26.342 Cannot find device "nvmf_tgt_br" 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:26.342 Cannot find device "nvmf_tgt_br2" 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:26.342 Cannot find device "nvmf_br" 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:26.342 Cannot find device "nvmf_init_if" 00:20:26.342 14:36:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:26.342 Cannot find device "nvmf_init_if2" 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:26.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:26.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:26.342 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:26.601 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:26.601 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:26.601 14:36:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:26.601 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:26.601 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:26.601 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:26.601 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:26.601 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:26.601 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:26.601 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:26.601 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:26.601 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:26.601 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:26.601 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:26.601 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:26.602 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:26.602 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:20:26.602 00:20:26.602 --- 10.0.0.3 ping statistics --- 00:20:26.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.602 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:26.602 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:26.602 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:20:26.602 00:20:26.602 --- 10.0.0.4 ping statistics --- 00:20:26.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.602 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:26.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:26.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:26.602 00:20:26.602 --- 10.0.0.1 ping statistics --- 00:20:26.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.602 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:26.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.031 ms 00:20:26.602 00:20:26.602 --- 10.0.0.2 ping statistics --- 00:20:26.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.602 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=93665 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 93665 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 93665 ']' 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
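With the veth pairs, bridge, and addressing verified by the four pings above, nvmfappstart launches the target inside the new namespace and waitforlisten blocks until its RPC socket answers. A simplified equivalent of what those helpers do; the polling loop is an assumption about waitforlisten's behaviour, with rpc_get_methods used only as a cheap probe RPC:

  # Start the target in the namespace, as logged above (pid 93665 in this run):
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the app is ready (sketch of waitforlisten):
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
        >/dev/null 2>&1; do
    sleep 0.5
  done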
00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.602 14:36:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:26.602 [2024-12-16 14:36:18.763539] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:20:26.602 [2024-12-16 14:36:18.763630] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.861 [2024-12-16 14:36:18.917568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.861 [2024-12-16 14:36:18.940882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.861 [2024-12-16 14:36:18.940944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.861 [2024-12-16 14:36:18.940958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.861 [2024-12-16 14:36:18.940968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.861 [2024-12-16 14:36:18.940977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.861 [2024-12-16 14:36:18.941324] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.861 [2024-12-16 14:36:18.976692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:26.861 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.861 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:20:26.861 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:26.861 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:26.861 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:27.120 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.120 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:27.120 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.120 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:27.120 [2024-12-16 14:36:19.081122] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.120 [2024-12-16 14:36:19.089280] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:27.120 null0 00:20:27.120 [2024-12-16 14:36:19.121188] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:27.120 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.120 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=93684 00:20:27.120 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:27.121 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 93684 /tmp/host.sock 00:20:27.121 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 93684 ']' 00:20:27.121 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:20:27.121 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.121 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:27.121 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:27.121 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.121 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:27.121 [2024-12-16 14:36:19.201033] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:20:27.121 [2024-12-16 14:36:19.201124] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93684 ] 00:20:27.380 [2024-12-16 14:36:19.349730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.380 [2024-12-16 14:36:19.368370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.380 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.380 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:20:27.380 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:27.380 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:27.380 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.380 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:27.380 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.380 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:27.380 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.380 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:27.380 [2024-12-16 14:36:19.508055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:27.380 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.380 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:27.380 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.380 14:36:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.755 [2024-12-16 14:36:20.543912] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:28.755 [2024-12-16 14:36:20.543935] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:28.755 [2024-12-16 14:36:20.543950] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:28.755 [2024-12-16 14:36:20.549944] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:28.755 [2024-12-16 14:36:20.604228] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:20:28.755 [2024-12-16 14:36:20.605086] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1baa0c0:1 started. 00:20:28.755 [2024-12-16 14:36:20.606569] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:28.755 [2024-12-16 14:36:20.606616] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:28.755 [2024-12-16 14:36:20.606642] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:28.755 [2024-12-16 14:36:20.606657] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:28.755 [2024-12-16 14:36:20.606679] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:28.755 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.755 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:28.755 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:28.755 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:28.755 [2024-12-16 14:36:20.612636] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:28.755 baa0c0 was disconnected and freed. delete nvme_qpair. 
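At this point two SPDK applications are running: the target (nvmf_tgt on /var/tmp/spdk.sock, core mask 0x2) listening on 10.0.0.3 ports 8009/4420, and the host-side app on /tmp/host.sock, which was pointed at the discovery service with deliberately short loss/reconnect timeouts so that pulling the interface later tears the controller down quickly. The test then polls the host's bdev list until the namespace bdev shows up. A rough equivalent of that polling, using rpc_cmd as the harness does (rpc_cmd is the autotest wrapper around SPDK's rpc.py; the wrapper internals are not shown in this trace):

  get_bdev_list() {    # names of all bdevs known to the host app, sorted, on one line
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {    # spin until the bdev list matches the expected value
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }
  wait_for_bdev nvme0n1   # discovery attached nvme0, so nvme0n1 should appear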
00:20:28.755 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.755 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:28.755 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.755 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:28.755 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.755 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:28.755 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:20:28.755 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:28.755 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:28.755 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:28.756 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:28.756 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:28.756 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:28.756 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:28.756 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.756 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.756 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.756 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:28.756 14:36:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:29.691 14:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:29.691 14:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:29.691 14:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:29.691 14:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.691 14:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:29.691 14:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:29.691 14:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:29.691 14:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.691 14:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:29.691 14:36:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:30.627 14:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:30.627 14:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:30.627 14:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:30.627 14:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.627 14:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:30.627 14:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:30.627 14:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:30.885 14:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.885 14:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:30.885 14:36:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:31.821 14:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:31.821 14:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:31.821 14:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:31.821 14:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.821 14:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:31.821 14:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:31.821 14:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:31.821 14:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.821 14:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:31.821 14:36:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:32.757 14:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:32.757 14:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:32.757 14:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:32.757 14:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.757 14:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:32.757 14:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:32.757 14:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:32.757 14:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.016 14:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' 
]] 00:20:33.016 14:36:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:33.950 14:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:33.950 14:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.950 14:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.950 14:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:33.950 14:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:33.950 14:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:33.950 14:36:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:33.950 14:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.950 [2024-12-16 14:36:26.035478] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:33.950 [2024-12-16 14:36:26.035747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.950 [2024-12-16 14:36:26.035914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.950 [2024-12-16 14:36:26.035932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.950 [2024-12-16 14:36:26.035941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.950 [2024-12-16 14:36:26.035950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.951 [2024-12-16 14:36:26.035959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.951 [2024-12-16 14:36:26.035968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.951 [2024-12-16 14:36:26.035976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.951 [2024-12-16 14:36:26.035985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.951 [2024-12-16 14:36:26.035994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.951 [2024-12-16 14:36:26.036002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7d340 is same with the state(6) to be set 00:20:33.951 [2024-12-16 14:36:26.045487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7d340 (9): Bad file descriptor 00:20:33.951 14:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:33.951 14:36:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:33.951 [2024-12-16 14:36:26.055534] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:33.951 [2024-12-16 14:36:26.055559] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:33.951 [2024-12-16 14:36:26.055572] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:33.951 [2024-12-16 14:36:26.055581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:33.951 [2024-12-16 14:36:26.055683] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:34.887 14:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:34.887 14:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:34.887 14:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:34.887 14:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.887 14:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:34.887 14:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:34.887 14:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:34.887 [2024-12-16 14:36:27.082538] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:20:34.887 [2024-12-16 14:36:27.082890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7d340 with addr=10.0.0.3, port=4420 00:20:34.887 [2024-12-16 14:36:27.083171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7d340 is same with the state(6) to be set 00:20:34.887 [2024-12-16 14:36:27.083288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7d340 (9): Bad file descriptor 00:20:34.887 [2024-12-16 14:36:27.084240] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:20:34.887 [2024-12-16 14:36:27.084344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:34.887 [2024-12-16 14:36:27.084368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:34.887 [2024-12-16 14:36:27.084415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:34.887 [2024-12-16 14:36:27.084479] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:34.887 [2024-12-16 14:36:27.084494] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:34.887 [2024-12-16 14:36:27.084504] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:34.887 [2024-12-16 14:36:27.084523] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
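The errno 110 (connection timed out) errors and the disconnect/reconnect churn above are the intended effect of the test: discovery_remove_ifc.sh@75-76 removed 10.0.0.3/24 from nvmf_tgt_if and downed the link inside the target namespace, so the host's queues to 10.0.0.3:4420 stop responding and bdev_nvme enters its reset path. Because the discovery connection was created with --ctrlr-loss-timeout-sec 2, --reconnect-delay-sec 1 and --fast-io-fail-timeout-sec 1, the controller and its nvme0n1 bdev are expected to be deleted after a couple of failed reconnect attempts. The fault injection itself reduces to roughly this sketch (wait_for_bdev is the helper outlined earlier):

  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  wait_for_bdev ''    # bdev list must drain to empty once the ctrlr is declared lost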
00:20:34.887 [2024-12-16 14:36:27.084535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:35.150 14:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.150 14:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:35.150 14:36:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:36.083 [2024-12-16 14:36:28.084618] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:36.083 [2024-12-16 14:36:28.084646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:36.083 [2024-12-16 14:36:28.084669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:36.084 [2024-12-16 14:36:28.084694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:36.084 [2024-12-16 14:36:28.084702] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:20:36.084 [2024-12-16 14:36:28.084710] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:36.084 [2024-12-16 14:36:28.084716] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:36.084 [2024-12-16 14:36:28.084720] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:36.084 [2024-12-16 14:36:28.084776] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:20:36.084 [2024-12-16 14:36:28.084815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.084 [2024-12-16 14:36:28.084830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.084 [2024-12-16 14:36:28.084843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.084 [2024-12-16 14:36:28.084851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.084 [2024-12-16 14:36:28.084860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.084 [2024-12-16 14:36:28.084867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.084 [2024-12-16 14:36:28.084876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.084 [2024-12-16 14:36:28.084883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.084 [2024-12-16 14:36:28.084892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:36.084 [2024-12-16 14:36:28.084899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:36.084 [2024-12-16 14:36:28.084907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:20:36.084 [2024-12-16 14:36:28.085137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b73960 (9): Bad file descriptor 00:20:36.084 [2024-12-16 14:36:28.086152] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:36.084 [2024-12-16 14:36:28.086179] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:36.084 14:36:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:37.458 14:36:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:37.458 14:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:37.458 14:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:37.458 14:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.458 14:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:37.458 14:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:37.458 14:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:37.458 14:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.458 14:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:37.458 14:36:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:38.025 [2024-12-16 14:36:30.098419] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:38.025 [2024-12-16 14:36:30.098449] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:38.025 [2024-12-16 14:36:30.098511] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:38.025 [2024-12-16 14:36:30.104477] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:20:38.025 [2024-12-16 14:36:30.158751] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:20:38.025 [2024-12-16 14:36:30.159888] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1b604c0:1 started. 00:20:38.025 [2024-12-16 14:36:30.161079] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:38.025 [2024-12-16 14:36:30.161133] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:38.025 [2024-12-16 14:36:30.161170] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:38.025 [2024-12-16 14:36:30.161200] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:20:38.025 [2024-12-16 14:36:30.161211] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:38.025 [2024-12-16 14:36:30.166793] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1b604c0 was disconnected and freed. delete nvme_qpair. 
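With the address re-added and nvmf_tgt_if brought back up (discovery_remove_ifc.sh@82-83), the still-running discovery service on 10.0.0.3:8009 re-attaches the subsystem as a second controller, nvme1, and the data bdev reappears under the new name, which is what @86 waits for. Roughly, using the same helpers as above:

  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1   # discovery re-attach creates nvme1, namespace bdev nvme1n1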
00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 93684 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 93684 ']' 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 93684 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93684 00:20:38.284 killing process with pid 93684 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93684' 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 93684 00:20:38.284 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 93684 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:38.542 rmmod nvme_tcp 00:20:38.542 rmmod nvme_fabrics 00:20:38.542 rmmod nvme_keyring 00:20:38.542 14:36:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 93665 ']' 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 93665 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 93665 ']' 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 93665 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93665 00:20:38.542 killing process with pid 93665 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93665' 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 93665 00:20:38.542 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 93665 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.801 14:36:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:20:39.059 00:20:39.059 real 0m12.941s 00:20:39.059 user 0m22.171s 00:20:39.059 sys 0m2.367s 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.059 ************************************ 00:20:39.059 END TEST nvmf_discovery_remove_ifc 00:20:39.059 ************************************ 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.059 ************************************ 00:20:39.059 START TEST nvmf_identify_kernel_target 00:20:39.059 ************************************ 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:39.059 * Looking for test storage... 
00:20:39.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.059 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.318 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:39.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.318 --rc genhtml_branch_coverage=1 00:20:39.318 --rc genhtml_function_coverage=1 00:20:39.318 --rc genhtml_legend=1 00:20:39.318 --rc geninfo_all_blocks=1 00:20:39.318 --rc geninfo_unexecuted_blocks=1 00:20:39.318 00:20:39.318 ' 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:39.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.319 --rc genhtml_branch_coverage=1 00:20:39.319 --rc genhtml_function_coverage=1 00:20:39.319 --rc genhtml_legend=1 00:20:39.319 --rc geninfo_all_blocks=1 00:20:39.319 --rc geninfo_unexecuted_blocks=1 00:20:39.319 00:20:39.319 ' 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:39.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.319 --rc genhtml_branch_coverage=1 00:20:39.319 --rc genhtml_function_coverage=1 00:20:39.319 --rc genhtml_legend=1 00:20:39.319 --rc geninfo_all_blocks=1 00:20:39.319 --rc geninfo_unexecuted_blocks=1 00:20:39.319 00:20:39.319 ' 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:39.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.319 --rc genhtml_branch_coverage=1 00:20:39.319 --rc genhtml_function_coverage=1 00:20:39.319 --rc genhtml_legend=1 00:20:39.319 --rc geninfo_all_blocks=1 00:20:39.319 --rc geninfo_unexecuted_blocks=1 00:20:39.319 00:20:39.319 ' 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:39.319 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:39.319 14:36:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:39.319 14:36:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:39.319 Cannot find device "nvmf_init_br" 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:39.319 Cannot find device "nvmf_init_br2" 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:39.319 Cannot find device "nvmf_tgt_br" 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:20:39.319 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:39.319 Cannot find device "nvmf_tgt_br2" 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:39.320 Cannot find device "nvmf_init_br" 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:39.320 Cannot find device "nvmf_init_br2" 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:39.320 Cannot find device "nvmf_tgt_br" 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:39.320 Cannot find device "nvmf_tgt_br2" 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:39.320 Cannot find device "nvmf_br" 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:39.320 Cannot find device "nvmf_init_if" 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:39.320 Cannot find device "nvmf_init_if2" 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:39.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:39.320 14:36:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:39.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:39.320 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:39.578 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:39.578 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:39.578 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:39.578 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:39.578 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:39.579 14:36:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:39.579 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:39.579 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:20:39.579 00:20:39.579 --- 10.0.0.3 ping statistics --- 00:20:39.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.579 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:39.579 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:39.579 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:20:39.579 00:20:39.579 --- 10.0.0.4 ping statistics --- 00:20:39.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.579 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:39.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:39.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:39.579 00:20:39.579 --- 10.0.0.1 ping statistics --- 00:20:39.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.579 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:39.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:39.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:20:39.579 00:20:39.579 --- 10.0.0.2 ping statistics --- 00:20:39.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.579 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:20:39.579 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:39.838 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:39.838 14:36:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:40.097 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:40.097 Waiting for block devices as requested 00:20:40.097 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:40.357 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:40.357 No valid GPT data, bailing 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:40.357 14:36:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:40.357 No valid GPT data, bailing 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:40.357 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:40.358 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:40.358 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:40.358 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:40.358 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:40.358 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:40.358 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:40.655 No valid GPT data, bailing 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:40.655 No valid GPT data, bailing 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -a 10.0.0.1 -t tcp -s 4420 00:20:40.655 00:20:40.655 Discovery Log Number of Records 2, Generation counter 2 00:20:40.655 =====Discovery Log Entry 0====== 00:20:40.655 trtype: tcp 00:20:40.655 adrfam: ipv4 00:20:40.655 subtype: current discovery subsystem 00:20:40.655 treq: not specified, sq flow control disable supported 00:20:40.655 portid: 1 00:20:40.655 trsvcid: 4420 00:20:40.655 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:40.655 traddr: 10.0.0.1 00:20:40.655 eflags: none 00:20:40.655 sectype: none 00:20:40.655 =====Discovery Log Entry 1====== 00:20:40.655 trtype: tcp 00:20:40.655 adrfam: ipv4 00:20:40.655 subtype: nvme subsystem 00:20:40.655 treq: not 
specified, sq flow control disable supported 00:20:40.655 portid: 1 00:20:40.655 trsvcid: 4420 00:20:40.655 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:40.655 traddr: 10.0.0.1 00:20:40.655 eflags: none 00:20:40.655 sectype: none 00:20:40.655 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:40.655 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:40.915 ===================================================== 00:20:40.915 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:40.915 ===================================================== 00:20:40.915 Controller Capabilities/Features 00:20:40.915 ================================ 00:20:40.915 Vendor ID: 0000 00:20:40.915 Subsystem Vendor ID: 0000 00:20:40.915 Serial Number: 21c0b48becf38a4c5abe 00:20:40.915 Model Number: Linux 00:20:40.915 Firmware Version: 6.8.9-20 00:20:40.915 Recommended Arb Burst: 0 00:20:40.915 IEEE OUI Identifier: 00 00 00 00:20:40.915 Multi-path I/O 00:20:40.915 May have multiple subsystem ports: No 00:20:40.915 May have multiple controllers: No 00:20:40.915 Associated with SR-IOV VF: No 00:20:40.915 Max Data Transfer Size: Unlimited 00:20:40.915 Max Number of Namespaces: 0 00:20:40.915 Max Number of I/O Queues: 1024 00:20:40.915 NVMe Specification Version (VS): 1.3 00:20:40.915 NVMe Specification Version (Identify): 1.3 00:20:40.915 Maximum Queue Entries: 1024 00:20:40.915 Contiguous Queues Required: No 00:20:40.915 Arbitration Mechanisms Supported 00:20:40.915 Weighted Round Robin: Not Supported 00:20:40.915 Vendor Specific: Not Supported 00:20:40.915 Reset Timeout: 7500 ms 00:20:40.915 Doorbell Stride: 4 bytes 00:20:40.915 NVM Subsystem Reset: Not Supported 00:20:40.915 Command Sets Supported 00:20:40.915 NVM Command Set: Supported 00:20:40.915 Boot Partition: Not Supported 00:20:40.915 Memory Page Size Minimum: 4096 bytes 00:20:40.915 Memory Page Size Maximum: 4096 bytes 00:20:40.915 Persistent Memory Region: Not Supported 00:20:40.915 Optional Asynchronous Events Supported 00:20:40.915 Namespace Attribute Notices: Not Supported 00:20:40.915 Firmware Activation Notices: Not Supported 00:20:40.915 ANA Change Notices: Not Supported 00:20:40.915 PLE Aggregate Log Change Notices: Not Supported 00:20:40.915 LBA Status Info Alert Notices: Not Supported 00:20:40.915 EGE Aggregate Log Change Notices: Not Supported 00:20:40.915 Normal NVM Subsystem Shutdown event: Not Supported 00:20:40.915 Zone Descriptor Change Notices: Not Supported 00:20:40.915 Discovery Log Change Notices: Supported 00:20:40.915 Controller Attributes 00:20:40.915 128-bit Host Identifier: Not Supported 00:20:40.915 Non-Operational Permissive Mode: Not Supported 00:20:40.915 NVM Sets: Not Supported 00:20:40.915 Read Recovery Levels: Not Supported 00:20:40.915 Endurance Groups: Not Supported 00:20:40.915 Predictable Latency Mode: Not Supported 00:20:40.915 Traffic Based Keep ALive: Not Supported 00:20:40.915 Namespace Granularity: Not Supported 00:20:40.915 SQ Associations: Not Supported 00:20:40.915 UUID List: Not Supported 00:20:40.915 Multi-Domain Subsystem: Not Supported 00:20:40.915 Fixed Capacity Management: Not Supported 00:20:40.915 Variable Capacity Management: Not Supported 00:20:40.915 Delete Endurance Group: Not Supported 00:20:40.915 Delete NVM Set: Not Supported 00:20:40.915 Extended LBA Formats Supported: Not Supported 00:20:40.915 Flexible Data 
Placement Supported: Not Supported 00:20:40.915 00:20:40.915 Controller Memory Buffer Support 00:20:40.915 ================================ 00:20:40.915 Supported: No 00:20:40.915 00:20:40.915 Persistent Memory Region Support 00:20:40.915 ================================ 00:20:40.915 Supported: No 00:20:40.915 00:20:40.915 Admin Command Set Attributes 00:20:40.915 ============================ 00:20:40.915 Security Send/Receive: Not Supported 00:20:40.915 Format NVM: Not Supported 00:20:40.915 Firmware Activate/Download: Not Supported 00:20:40.915 Namespace Management: Not Supported 00:20:40.915 Device Self-Test: Not Supported 00:20:40.915 Directives: Not Supported 00:20:40.915 NVMe-MI: Not Supported 00:20:40.915 Virtualization Management: Not Supported 00:20:40.915 Doorbell Buffer Config: Not Supported 00:20:40.915 Get LBA Status Capability: Not Supported 00:20:40.915 Command & Feature Lockdown Capability: Not Supported 00:20:40.915 Abort Command Limit: 1 00:20:40.915 Async Event Request Limit: 1 00:20:40.915 Number of Firmware Slots: N/A 00:20:40.915 Firmware Slot 1 Read-Only: N/A 00:20:40.915 Firmware Activation Without Reset: N/A 00:20:40.915 Multiple Update Detection Support: N/A 00:20:40.915 Firmware Update Granularity: No Information Provided 00:20:40.915 Per-Namespace SMART Log: No 00:20:40.915 Asymmetric Namespace Access Log Page: Not Supported 00:20:40.915 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:40.915 Command Effects Log Page: Not Supported 00:20:40.915 Get Log Page Extended Data: Supported 00:20:40.915 Telemetry Log Pages: Not Supported 00:20:40.915 Persistent Event Log Pages: Not Supported 00:20:40.915 Supported Log Pages Log Page: May Support 00:20:40.915 Commands Supported & Effects Log Page: Not Supported 00:20:40.915 Feature Identifiers & Effects Log Page:May Support 00:20:40.915 NVMe-MI Commands & Effects Log Page: May Support 00:20:40.915 Data Area 4 for Telemetry Log: Not Supported 00:20:40.915 Error Log Page Entries Supported: 1 00:20:40.915 Keep Alive: Not Supported 00:20:40.915 00:20:40.915 NVM Command Set Attributes 00:20:40.915 ========================== 00:20:40.915 Submission Queue Entry Size 00:20:40.915 Max: 1 00:20:40.915 Min: 1 00:20:40.915 Completion Queue Entry Size 00:20:40.915 Max: 1 00:20:40.915 Min: 1 00:20:40.915 Number of Namespaces: 0 00:20:40.915 Compare Command: Not Supported 00:20:40.915 Write Uncorrectable Command: Not Supported 00:20:40.915 Dataset Management Command: Not Supported 00:20:40.915 Write Zeroes Command: Not Supported 00:20:40.915 Set Features Save Field: Not Supported 00:20:40.916 Reservations: Not Supported 00:20:40.916 Timestamp: Not Supported 00:20:40.916 Copy: Not Supported 00:20:40.916 Volatile Write Cache: Not Present 00:20:40.916 Atomic Write Unit (Normal): 1 00:20:40.916 Atomic Write Unit (PFail): 1 00:20:40.916 Atomic Compare & Write Unit: 1 00:20:40.916 Fused Compare & Write: Not Supported 00:20:40.916 Scatter-Gather List 00:20:40.916 SGL Command Set: Supported 00:20:40.916 SGL Keyed: Not Supported 00:20:40.916 SGL Bit Bucket Descriptor: Not Supported 00:20:40.916 SGL Metadata Pointer: Not Supported 00:20:40.916 Oversized SGL: Not Supported 00:20:40.916 SGL Metadata Address: Not Supported 00:20:40.916 SGL Offset: Supported 00:20:40.916 Transport SGL Data Block: Not Supported 00:20:40.916 Replay Protected Memory Block: Not Supported 00:20:40.916 00:20:40.916 Firmware Slot Information 00:20:40.916 ========================= 00:20:40.916 Active slot: 0 00:20:40.916 00:20:40.916 00:20:40.916 Error Log 
00:20:40.916 ========= 00:20:40.916 00:20:40.916 Active Namespaces 00:20:40.916 ================= 00:20:40.916 Discovery Log Page 00:20:40.916 ================== 00:20:40.916 Generation Counter: 2 00:20:40.916 Number of Records: 2 00:20:40.916 Record Format: 0 00:20:40.916 00:20:40.916 Discovery Log Entry 0 00:20:40.916 ---------------------- 00:20:40.916 Transport Type: 3 (TCP) 00:20:40.916 Address Family: 1 (IPv4) 00:20:40.916 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:40.916 Entry Flags: 00:20:40.916 Duplicate Returned Information: 0 00:20:40.916 Explicit Persistent Connection Support for Discovery: 0 00:20:40.916 Transport Requirements: 00:20:40.916 Secure Channel: Not Specified 00:20:40.916 Port ID: 1 (0x0001) 00:20:40.916 Controller ID: 65535 (0xffff) 00:20:40.916 Admin Max SQ Size: 32 00:20:40.916 Transport Service Identifier: 4420 00:20:40.916 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:40.916 Transport Address: 10.0.0.1 00:20:40.916 Discovery Log Entry 1 00:20:40.916 ---------------------- 00:20:40.916 Transport Type: 3 (TCP) 00:20:40.916 Address Family: 1 (IPv4) 00:20:40.916 Subsystem Type: 2 (NVM Subsystem) 00:20:40.916 Entry Flags: 00:20:40.916 Duplicate Returned Information: 0 00:20:40.916 Explicit Persistent Connection Support for Discovery: 0 00:20:40.916 Transport Requirements: 00:20:40.916 Secure Channel: Not Specified 00:20:40.916 Port ID: 1 (0x0001) 00:20:40.916 Controller ID: 65535 (0xffff) 00:20:40.916 Admin Max SQ Size: 32 00:20:40.916 Transport Service Identifier: 4420 00:20:40.916 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:40.916 Transport Address: 10.0.0.1 00:20:40.916 14:36:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:40.916 get_feature(0x01) failed 00:20:40.916 get_feature(0x02) failed 00:20:40.916 get_feature(0x04) failed 00:20:40.916 ===================================================== 00:20:40.916 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:40.916 ===================================================== 00:20:40.916 Controller Capabilities/Features 00:20:40.916 ================================ 00:20:40.916 Vendor ID: 0000 00:20:40.916 Subsystem Vendor ID: 0000 00:20:40.916 Serial Number: fa2d3624c07d65ebec22 00:20:40.916 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:40.916 Firmware Version: 6.8.9-20 00:20:40.916 Recommended Arb Burst: 6 00:20:40.916 IEEE OUI Identifier: 00 00 00 00:20:40.916 Multi-path I/O 00:20:40.916 May have multiple subsystem ports: Yes 00:20:40.916 May have multiple controllers: Yes 00:20:40.916 Associated with SR-IOV VF: No 00:20:40.916 Max Data Transfer Size: Unlimited 00:20:40.916 Max Number of Namespaces: 1024 00:20:40.916 Max Number of I/O Queues: 128 00:20:40.916 NVMe Specification Version (VS): 1.3 00:20:40.916 NVMe Specification Version (Identify): 1.3 00:20:40.916 Maximum Queue Entries: 1024 00:20:40.916 Contiguous Queues Required: No 00:20:40.916 Arbitration Mechanisms Supported 00:20:40.916 Weighted Round Robin: Not Supported 00:20:40.916 Vendor Specific: Not Supported 00:20:40.916 Reset Timeout: 7500 ms 00:20:40.916 Doorbell Stride: 4 bytes 00:20:40.916 NVM Subsystem Reset: Not Supported 00:20:40.916 Command Sets Supported 00:20:40.916 NVM Command Set: Supported 00:20:40.916 Boot Partition: Not Supported 00:20:40.916 Memory 
Page Size Minimum: 4096 bytes 00:20:40.916 Memory Page Size Maximum: 4096 bytes 00:20:40.916 Persistent Memory Region: Not Supported 00:20:40.916 Optional Asynchronous Events Supported 00:20:40.916 Namespace Attribute Notices: Supported 00:20:40.916 Firmware Activation Notices: Not Supported 00:20:40.916 ANA Change Notices: Supported 00:20:40.916 PLE Aggregate Log Change Notices: Not Supported 00:20:40.916 LBA Status Info Alert Notices: Not Supported 00:20:40.916 EGE Aggregate Log Change Notices: Not Supported 00:20:40.916 Normal NVM Subsystem Shutdown event: Not Supported 00:20:40.916 Zone Descriptor Change Notices: Not Supported 00:20:40.916 Discovery Log Change Notices: Not Supported 00:20:40.916 Controller Attributes 00:20:40.916 128-bit Host Identifier: Supported 00:20:40.916 Non-Operational Permissive Mode: Not Supported 00:20:40.916 NVM Sets: Not Supported 00:20:40.916 Read Recovery Levels: Not Supported 00:20:40.916 Endurance Groups: Not Supported 00:20:40.916 Predictable Latency Mode: Not Supported 00:20:40.916 Traffic Based Keep ALive: Supported 00:20:40.916 Namespace Granularity: Not Supported 00:20:40.916 SQ Associations: Not Supported 00:20:40.916 UUID List: Not Supported 00:20:40.916 Multi-Domain Subsystem: Not Supported 00:20:40.916 Fixed Capacity Management: Not Supported 00:20:40.916 Variable Capacity Management: Not Supported 00:20:40.916 Delete Endurance Group: Not Supported 00:20:40.916 Delete NVM Set: Not Supported 00:20:40.916 Extended LBA Formats Supported: Not Supported 00:20:40.916 Flexible Data Placement Supported: Not Supported 00:20:40.916 00:20:40.916 Controller Memory Buffer Support 00:20:40.916 ================================ 00:20:40.916 Supported: No 00:20:40.916 00:20:40.916 Persistent Memory Region Support 00:20:40.916 ================================ 00:20:40.916 Supported: No 00:20:40.916 00:20:40.916 Admin Command Set Attributes 00:20:40.916 ============================ 00:20:40.916 Security Send/Receive: Not Supported 00:20:40.916 Format NVM: Not Supported 00:20:40.916 Firmware Activate/Download: Not Supported 00:20:40.916 Namespace Management: Not Supported 00:20:40.916 Device Self-Test: Not Supported 00:20:40.916 Directives: Not Supported 00:20:40.916 NVMe-MI: Not Supported 00:20:40.916 Virtualization Management: Not Supported 00:20:40.916 Doorbell Buffer Config: Not Supported 00:20:40.916 Get LBA Status Capability: Not Supported 00:20:40.916 Command & Feature Lockdown Capability: Not Supported 00:20:40.916 Abort Command Limit: 4 00:20:40.916 Async Event Request Limit: 4 00:20:40.916 Number of Firmware Slots: N/A 00:20:40.916 Firmware Slot 1 Read-Only: N/A 00:20:40.916 Firmware Activation Without Reset: N/A 00:20:40.916 Multiple Update Detection Support: N/A 00:20:40.916 Firmware Update Granularity: No Information Provided 00:20:40.916 Per-Namespace SMART Log: Yes 00:20:40.916 Asymmetric Namespace Access Log Page: Supported 00:20:40.916 ANA Transition Time : 10 sec 00:20:40.916 00:20:40.916 Asymmetric Namespace Access Capabilities 00:20:40.916 ANA Optimized State : Supported 00:20:40.916 ANA Non-Optimized State : Supported 00:20:40.916 ANA Inaccessible State : Supported 00:20:40.916 ANA Persistent Loss State : Supported 00:20:40.916 ANA Change State : Supported 00:20:40.916 ANAGRPID is not changed : No 00:20:40.916 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:40.916 00:20:40.916 ANA Group Identifier Maximum : 128 00:20:40.916 Number of ANA Group Identifiers : 128 00:20:40.916 Max Number of Allowed Namespaces : 1024 00:20:40.916 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:20:40.916 Command Effects Log Page: Supported 00:20:40.916 Get Log Page Extended Data: Supported 00:20:40.916 Telemetry Log Pages: Not Supported 00:20:40.916 Persistent Event Log Pages: Not Supported 00:20:40.916 Supported Log Pages Log Page: May Support 00:20:40.916 Commands Supported & Effects Log Page: Not Supported 00:20:40.916 Feature Identifiers & Effects Log Page:May Support 00:20:40.916 NVMe-MI Commands & Effects Log Page: May Support 00:20:40.916 Data Area 4 for Telemetry Log: Not Supported 00:20:40.916 Error Log Page Entries Supported: 128 00:20:40.916 Keep Alive: Supported 00:20:40.916 Keep Alive Granularity: 1000 ms 00:20:40.916 00:20:40.916 NVM Command Set Attributes 00:20:40.916 ========================== 00:20:40.916 Submission Queue Entry Size 00:20:40.916 Max: 64 00:20:40.916 Min: 64 00:20:40.916 Completion Queue Entry Size 00:20:40.916 Max: 16 00:20:40.916 Min: 16 00:20:40.916 Number of Namespaces: 1024 00:20:40.916 Compare Command: Not Supported 00:20:40.916 Write Uncorrectable Command: Not Supported 00:20:40.916 Dataset Management Command: Supported 00:20:40.916 Write Zeroes Command: Supported 00:20:40.916 Set Features Save Field: Not Supported 00:20:40.917 Reservations: Not Supported 00:20:40.917 Timestamp: Not Supported 00:20:40.917 Copy: Not Supported 00:20:40.917 Volatile Write Cache: Present 00:20:40.917 Atomic Write Unit (Normal): 1 00:20:40.917 Atomic Write Unit (PFail): 1 00:20:40.917 Atomic Compare & Write Unit: 1 00:20:40.917 Fused Compare & Write: Not Supported 00:20:40.917 Scatter-Gather List 00:20:40.917 SGL Command Set: Supported 00:20:40.917 SGL Keyed: Not Supported 00:20:40.917 SGL Bit Bucket Descriptor: Not Supported 00:20:40.917 SGL Metadata Pointer: Not Supported 00:20:40.917 Oversized SGL: Not Supported 00:20:40.917 SGL Metadata Address: Not Supported 00:20:40.917 SGL Offset: Supported 00:20:40.917 Transport SGL Data Block: Not Supported 00:20:40.917 Replay Protected Memory Block: Not Supported 00:20:40.917 00:20:40.917 Firmware Slot Information 00:20:40.917 ========================= 00:20:40.917 Active slot: 0 00:20:40.917 00:20:40.917 Asymmetric Namespace Access 00:20:40.917 =========================== 00:20:40.917 Change Count : 0 00:20:40.917 Number of ANA Group Descriptors : 1 00:20:40.917 ANA Group Descriptor : 0 00:20:40.917 ANA Group ID : 1 00:20:40.917 Number of NSID Values : 1 00:20:40.917 Change Count : 0 00:20:40.917 ANA State : 1 00:20:40.917 Namespace Identifier : 1 00:20:40.917 00:20:40.917 Commands Supported and Effects 00:20:40.917 ============================== 00:20:40.917 Admin Commands 00:20:40.917 -------------- 00:20:40.917 Get Log Page (02h): Supported 00:20:40.917 Identify (06h): Supported 00:20:40.917 Abort (08h): Supported 00:20:40.917 Set Features (09h): Supported 00:20:40.917 Get Features (0Ah): Supported 00:20:40.917 Asynchronous Event Request (0Ch): Supported 00:20:40.917 Keep Alive (18h): Supported 00:20:40.917 I/O Commands 00:20:40.917 ------------ 00:20:40.917 Flush (00h): Supported 00:20:40.917 Write (01h): Supported LBA-Change 00:20:40.917 Read (02h): Supported 00:20:40.917 Write Zeroes (08h): Supported LBA-Change 00:20:40.917 Dataset Management (09h): Supported 00:20:40.917 00:20:40.917 Error Log 00:20:40.917 ========= 00:20:40.917 Entry: 0 00:20:40.917 Error Count: 0x3 00:20:40.917 Submission Queue Id: 0x0 00:20:40.917 Command Id: 0x5 00:20:40.917 Phase Bit: 0 00:20:40.917 Status Code: 0x2 00:20:40.917 Status Code Type: 0x0 00:20:40.917 Do Not Retry: 1 00:20:41.175 Error 
Location: 0x28 00:20:41.175 LBA: 0x0 00:20:41.175 Namespace: 0x0 00:20:41.175 Vendor Log Page: 0x0 00:20:41.175 ----------- 00:20:41.175 Entry: 1 00:20:41.176 Error Count: 0x2 00:20:41.176 Submission Queue Id: 0x0 00:20:41.176 Command Id: 0x5 00:20:41.176 Phase Bit: 0 00:20:41.176 Status Code: 0x2 00:20:41.176 Status Code Type: 0x0 00:20:41.176 Do Not Retry: 1 00:20:41.176 Error Location: 0x28 00:20:41.176 LBA: 0x0 00:20:41.176 Namespace: 0x0 00:20:41.176 Vendor Log Page: 0x0 00:20:41.176 ----------- 00:20:41.176 Entry: 2 00:20:41.176 Error Count: 0x1 00:20:41.176 Submission Queue Id: 0x0 00:20:41.176 Command Id: 0x4 00:20:41.176 Phase Bit: 0 00:20:41.176 Status Code: 0x2 00:20:41.176 Status Code Type: 0x0 00:20:41.176 Do Not Retry: 1 00:20:41.176 Error Location: 0x28 00:20:41.176 LBA: 0x0 00:20:41.176 Namespace: 0x0 00:20:41.176 Vendor Log Page: 0x0 00:20:41.176 00:20:41.176 Number of Queues 00:20:41.176 ================ 00:20:41.176 Number of I/O Submission Queues: 128 00:20:41.176 Number of I/O Completion Queues: 128 00:20:41.176 00:20:41.176 ZNS Specific Controller Data 00:20:41.176 ============================ 00:20:41.176 Zone Append Size Limit: 0 00:20:41.176 00:20:41.176 00:20:41.176 Active Namespaces 00:20:41.176 ================= 00:20:41.176 get_feature(0x05) failed 00:20:41.176 Namespace ID:1 00:20:41.176 Command Set Identifier: NVM (00h) 00:20:41.176 Deallocate: Supported 00:20:41.176 Deallocated/Unwritten Error: Not Supported 00:20:41.176 Deallocated Read Value: Unknown 00:20:41.176 Deallocate in Write Zeroes: Not Supported 00:20:41.176 Deallocated Guard Field: 0xFFFF 00:20:41.176 Flush: Supported 00:20:41.176 Reservation: Not Supported 00:20:41.176 Namespace Sharing Capabilities: Multiple Controllers 00:20:41.176 Size (in LBAs): 1310720 (5GiB) 00:20:41.176 Capacity (in LBAs): 1310720 (5GiB) 00:20:41.176 Utilization (in LBAs): 1310720 (5GiB) 00:20:41.176 UUID: 8bf9c572-10dd-4874-aa58-98078bc79805 00:20:41.176 Thin Provisioning: Not Supported 00:20:41.176 Per-NS Atomic Units: Yes 00:20:41.176 Atomic Boundary Size (Normal): 0 00:20:41.176 Atomic Boundary Size (PFail): 0 00:20:41.176 Atomic Boundary Offset: 0 00:20:41.176 NGUID/EUI64 Never Reused: No 00:20:41.176 ANA group ID: 1 00:20:41.176 Namespace Write Protected: No 00:20:41.176 Number of LBA Formats: 1 00:20:41.176 Current LBA Format: LBA Format #00 00:20:41.176 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:41.176 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:41.176 rmmod nvme_tcp 00:20:41.176 rmmod nvme_fabrics 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:20:41.176 14:36:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:41.176 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:41.435 14:36:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:42.371 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:42.371 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:42.371 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:42.371 ************************************ 00:20:42.371 END TEST nvmf_identify_kernel_target 00:20:42.371 ************************************ 00:20:42.371 00:20:42.371 real 0m3.346s 00:20:42.371 user 0m1.219s 00:20:42.371 sys 0m1.472s 00:20:42.371 14:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.371 14:36:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.371 14:36:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:42.371 14:36:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:42.371 14:36:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.371 14:36:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.371 ************************************ 00:20:42.371 START TEST nvmf_auth_host 00:20:42.371 ************************************ 00:20:42.371 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:42.371 * Looking for test storage... 
00:20:42.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:42.630 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:42.630 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:20:42.630 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:42.630 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:42.630 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.630 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.630 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.630 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.630 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:42.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.631 --rc genhtml_branch_coverage=1 00:20:42.631 --rc genhtml_function_coverage=1 00:20:42.631 --rc genhtml_legend=1 00:20:42.631 --rc geninfo_all_blocks=1 00:20:42.631 --rc geninfo_unexecuted_blocks=1 00:20:42.631 00:20:42.631 ' 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:42.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.631 --rc genhtml_branch_coverage=1 00:20:42.631 --rc genhtml_function_coverage=1 00:20:42.631 --rc genhtml_legend=1 00:20:42.631 --rc geninfo_all_blocks=1 00:20:42.631 --rc geninfo_unexecuted_blocks=1 00:20:42.631 00:20:42.631 ' 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:42.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.631 --rc genhtml_branch_coverage=1 00:20:42.631 --rc genhtml_function_coverage=1 00:20:42.631 --rc genhtml_legend=1 00:20:42.631 --rc geninfo_all_blocks=1 00:20:42.631 --rc geninfo_unexecuted_blocks=1 00:20:42.631 00:20:42.631 ' 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:42.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.631 --rc genhtml_branch_coverage=1 00:20:42.631 --rc genhtml_function_coverage=1 00:20:42.631 --rc genhtml_legend=1 00:20:42.631 --rc geninfo_all_blocks=1 00:20:42.631 --rc geninfo_unexecuted_blocks=1 00:20:42.631 00:20:42.631 ' 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.631 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:42.631 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:42.632 Cannot find device "nvmf_init_br" 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:42.632 Cannot find device "nvmf_init_br2" 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:42.632 Cannot find device "nvmf_tgt_br" 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:42.632 Cannot find device "nvmf_tgt_br2" 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:42.632 Cannot find device "nvmf_init_br" 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:42.632 Cannot find device "nvmf_init_br2" 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:42.632 Cannot find device "nvmf_tgt_br" 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:42.632 Cannot find device "nvmf_tgt_br2" 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:20:42.632 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:42.632 Cannot find device "nvmf_br" 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:42.891 Cannot find device "nvmf_init_if" 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:42.891 Cannot find device "nvmf_init_if2" 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:42.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:42.891 14:36:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:42.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:42.891 14:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:42.891 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:42.891 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:42.891 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:42.892 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:42.892 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:42.892 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:42.892 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:42.892 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
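[annotation] The nvmf_veth_init block above builds a small veth/namespace topology; the earlier "Cannot find device" messages are only the script deleting leftovers from a previous run. A condensed sketch, using exactly the interface names and addresses from the trace (requires root and iproute2):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator-side pairs
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target-side pairs
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends move into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joins the four peer ends
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up && ip link set "$dev" master nvmf_br
    done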
00:20:42.892 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:42.892 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:42.892 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:42.892 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:42.892 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:43.150 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:43.150 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:20:43.150 00:20:43.150 --- 10.0.0.3 ping statistics --- 00:20:43.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.150 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:43.150 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:43.150 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:20:43.150 00:20:43.150 --- 10.0.0.4 ping statistics --- 00:20:43.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.150 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:43.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:43.150 00:20:43.150 --- 10.0.0.1 ping statistics --- 00:20:43.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.150 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:43.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:43.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:20:43.150 00:20:43.150 --- 10.0.0.2 ping statistics --- 00:20:43.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.150 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:43.150 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.151 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.151 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=94667 00:20:43.151 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 94667 00:20:43.151 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:43.151 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 94667 ']' 00:20:43.151 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.151 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.151 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
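[annotation] Summarizing the steps just traced: the firewall is opened for the NVMe/TCP listener port, connectivity is checked in both directions across the bridge, the nvme-tcp driver is loaded, and the SPDK target is started inside the namespace. A sketch using the rule text, paths and options visible in the log (the pid 94667 and shm id are per-run values):

    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4                       # initiator -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target namespace -> initiator
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
    modprobe nvme-tcp                                              # initiator-side NVMe/TCP driver
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &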
00:20:43.151 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.151 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=74dad6c036fdec3b16804a71d05a8b79 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5R5 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 74dad6c036fdec3b16804a71d05a8b79 0 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 74dad6c036fdec3b16804a71d05a8b79 0 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=74dad6c036fdec3b16804a71d05a8b79 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5R5 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5R5 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.5R5 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:43.409 14:36:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e360366121b382b2c7f91f5aff2528af5bb1d05665615b04ddf2bcc5d655689c 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:43.409 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.a3H 00:20:43.410 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e360366121b382b2c7f91f5aff2528af5bb1d05665615b04ddf2bcc5d655689c 3 00:20:43.410 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e360366121b382b2c7f91f5aff2528af5bb1d05665615b04ddf2bcc5d655689c 3 00:20:43.410 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:43.410 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:43.410 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e360366121b382b2c7f91f5aff2528af5bb1d05665615b04ddf2bcc5d655689c 00:20:43.410 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:20:43.410 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.a3H 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.a3H 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.a3H 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4133abd333b51d7811dc8e8b43bc214406a74370bb733783 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ixC 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4133abd333b51d7811dc8e8b43bc214406a74370bb733783 0 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4133abd333b51d7811dc8e8b43bc214406a74370bb733783 0 
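[annotation] Each gen_dhchap_key call above draws random bytes with xxd, keeps the hex string as the secret, and wraps it into a DHHC-1 secret via an inline python snippet that the xtrace does not print. A rough sketch of the equivalent for "gen_dhchap_key null 32"; the CRC-32 suffix inside the base64 payload is an assumption about what format_key computes:

    key=$(xxd -p -c0 -l 16 /dev/urandom)            # 32 hex chars for a 'null 32' key
    file=$(mktemp -t spdk.key-null.XXX)
    python3 - "$key" 0 > "$file" <<'PY'
    import base64, sys, zlib
    key, hmac_id = sys.argv[1].encode(), int(sys.argv[2])    # 0=none, 1=sha256, 2=sha384, 3=sha512
    payload = key + zlib.crc32(key).to_bytes(4, "little")    # assumed layout: ASCII key + CRC-32
    print(f"DHHC-1:{hmac_id:02}:{base64.b64encode(payload).decode()}:")
    PY
    chmod 0600 "$file"                                        # keys are chmod 0600 as in the trace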
00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4133abd333b51d7811dc8e8b43bc214406a74370bb733783 00:20:43.668 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ixC 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ixC 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ixC 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=304ec5bd68080def2095f78cbac39bb5397309fa8282a758 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.2gR 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 304ec5bd68080def2095f78cbac39bb5397309fa8282a758 2 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 304ec5bd68080def2095f78cbac39bb5397309fa8282a758 2 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=304ec5bd68080def2095f78cbac39bb5397309fa8282a758 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.2gR 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.2gR 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.2gR 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.669 14:36:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c0426aad9ee21fdcf44beb053d43edd8 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.8sv 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c0426aad9ee21fdcf44beb053d43edd8 1 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c0426aad9ee21fdcf44beb053d43edd8 1 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c0426aad9ee21fdcf44beb053d43edd8 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.8sv 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.8sv 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.8sv 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1978d5219dbfadbfa0d23fa48e5cd0e6 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.WEv 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1978d5219dbfadbfa0d23fa48e5cd0e6 1 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1978d5219dbfadbfa0d23fa48e5cd0e6 1 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=1978d5219dbfadbfa0d23fa48e5cd0e6 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:20:43.669 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.WEv 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.WEv 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.WEv 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c0605748174bb1438f7b68b3ab949ae23eb9edd9ee59eb9f 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.EoW 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c0605748174bb1438f7b68b3ab949ae23eb9edd9ee59eb9f 2 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c0605748174bb1438f7b68b3ab949ae23eb9edd9ee59eb9f 2 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c0605748174bb1438f7b68b3ab949ae23eb9edd9ee59eb9f 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.EoW 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.EoW 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.EoW 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:43.928 14:36:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=26a7a43fe14aa2fe418635b5d8b0ad45 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.GBs 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 26a7a43fe14aa2fe418635b5d8b0ad45 0 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 26a7a43fe14aa2fe418635b5d8b0ad45 0 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=26a7a43fe14aa2fe418635b5d8b0ad45 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:43.928 14:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.GBs 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.GBs 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.GBs 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=981e9ee4ee11412dd9da8c58dc5f838270d6e43f56f1d316fc0c40e135fa1e6c 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.7CX 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 981e9ee4ee11412dd9da8c58dc5f838270d6e43f56f1d316fc0c40e135fa1e6c 3 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 981e9ee4ee11412dd9da8c58dc5f838270d6e43f56f1d316fc0c40e135fa1e6c 3 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=981e9ee4ee11412dd9da8c58dc5f838270d6e43f56f1d316fc0c40e135fa1e6c 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.7CX 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.7CX 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.7CX 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 94667 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 94667 ']' 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.928 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5R5 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.a3H ]] 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.a3H 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ixC 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.2gR ]] 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.2gR 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.8sv 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.WEv ]] 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.WEv 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.EoW 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.GBs ]] 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.GBs 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.7CX 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:44.495 14:36:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:44.495 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:44.496 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:44.496 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:44.496 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:20:44.496 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:44.496 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:44.496 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:44.496 14:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:44.754 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:44.754 Waiting for block devices as requested 00:20:44.754 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:45.012 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:45.579 No valid GPT data, bailing 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:45.579 No valid GPT data, bailing 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:45.579 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:45.837 No valid GPT data, bailing 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:45.837 No valid GPT data, bailing 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -a 10.0.0.1 -t tcp -s 4420 00:20:45.837 00:20:45.837 Discovery Log Number of Records 2, Generation counter 2 00:20:45.837 =====Discovery Log Entry 0====== 00:20:45.837 trtype: tcp 00:20:45.837 adrfam: ipv4 00:20:45.837 subtype: current discovery subsystem 00:20:45.837 treq: not specified, sq flow control disable supported 00:20:45.837 portid: 1 00:20:45.837 trsvcid: 4420 00:20:45.837 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:45.837 traddr: 10.0.0.1 00:20:45.837 eflags: none 00:20:45.837 sectype: none 00:20:45.837 =====Discovery Log Entry 1====== 00:20:45.837 trtype: tcp 00:20:45.837 adrfam: ipv4 00:20:45.837 subtype: nvme subsystem 00:20:45.837 treq: not specified, sq flow control disable supported 00:20:45.837 portid: 1 00:20:45.837 trsvcid: 4420 00:20:45.837 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:45.837 traddr: 10.0.0.1 00:20:45.837 eflags: none 00:20:45.837 sectype: none 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:45.837 14:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.096 nvme0n1 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: ]] 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.096 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.355 nvme0n1 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.355 
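What this stretch of the trace exercises, condensed: for every DH-HMAC-CHAP digest, DH group and key id, the script first installs the secret on the kernel nvmet target side (the nvmet_auth_set_key echoes of 'hmac(sha256)', the ffdhe group and the DHHC-1 strings), then has the SPDK host restrict itself to that same digest/group, attach with the matching named keys, confirm the controller came up, and detach before the next round. A minimal sketch of that sweep follows; the scripts/rpc.py spelling of the trace's rpc_cmd wrapper, the keys/ckeys arrays and the configfs attribute names are assumptions for illustration, while the rpc.py subcommands and their arguments are the ones visible in the log.

#!/usr/bin/env bash
# Sketch of the digest x dhgroup x keyid sweep performed by host/auth.sh.
# Assumed: dhchap_hash/dhchap_dhgroup/dhchap_key/dhchap_ctrl_key attribute names
# and the keys[]/ckeys[] arrays; the host entry itself is created by auth.sh@36.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
for digest in sha256 sha384 sha512; do
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in "${!keys[@]}"; do
      # Target side: tell the kernel nvmet host entry which hash, DH group and
      # secret(s) to expect for this round.
      echo "hmac($digest)"   > "$host_dir/dhchap_hash"
      echo "$dhgroup"        > "$host_dir/dhchap_dhgroup"
      echo "${keys[$keyid]}" > "$host_dir/dhchap_key"
      ctrlr_key=()
      if [[ -n "${ckeys[$keyid]}" ]]; then
        echo "${ckeys[$keyid]}" > "$host_dir/dhchap_ctrl_key"
        ctrlr_key=(--dhchap-ctrlr-key "ckey$keyid")
      fi
      # Host side: limit SPDK to the same digest/DH group, attach with the
      # matching key names, verify, then detach for the next combination.
      scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" "${ctrlr_key[@]}"
      scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
      scripts/rpc.py bdev_nvme_detach_controller nvme0
    done
  done
done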
14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:46.355 14:36:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.355 nvme0n1 00:20:46.355 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:20:46.614 14:36:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: ]] 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.614 nvme0n1 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: ]] 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:46.614 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.614 14:36:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.873 nvme0n1 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:46.873 
14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.873 14:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
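The secrets exchanged above follow the DH-HMAC-CHAP secret representation: a DHHC-1: prefix, a two-digit field recording which hash, if any, the secret was transformed with (00 = no transform, 01/02/03 = SHA-256/384/512), the base64-encoded secret material, and a closing colon. Key id 4 in this run has no companion controller key (its ckey is empty), so the attach for that key only authenticates the host. The two attach shapes seen in the trace are sketched below, with scripts/rpc.py standing in for the rpc_cmd wrapper (an assumption); key0..key4 and ckey0..ckey3 are key names the script prepared earlier in the run, outside this excerpt.

# Bidirectional authentication (key ids 0-3): host and controller each prove
# possession of a secret.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3

# Unidirectional authentication (key id 4, whose ckey is empty): only the host
# is challenged, so --dhchap-ctrlr-key is omitted.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key4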
00:20:47.131 nvme0n1 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.131 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: ]] 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:47.390 14:36:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.390 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.648 nvme0n1 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.648 14:36:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.648 14:36:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.648 nvme0n1 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.648 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: ]] 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.908 14:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.908 nvme0n1 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: ]] 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.908 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.167 nvme0n1 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.167 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.425 nvme0n1 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.425 14:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:48.991 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:48.991 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: ]] 00:20:48.991 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:48.991 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:48.991 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.991 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.991 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:48.991 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.992 14:36:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.992 nvme0n1 00:20:48.992 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:49.250 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.251 14:36:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.251 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.509 nvme0n1 00:20:49.509 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.509 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.509 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.509 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.509 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: ]] 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.510 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.768 nvme0n1 00:20:49.768 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: ]] 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.769 nvme0n1 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.769 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.027 14:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.027 14:36:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.027 nvme0n1 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.027 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.285 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.285 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.285 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.285 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.285 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.285 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.285 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.285 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.285 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:50.285 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.285 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.285 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:50.285 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:50.285 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:50.285 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:50.285 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.285 14:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:51.658 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:51.658 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: ]] 00:20:51.658 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:51.658 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:51.658 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.658 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:51.658 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:51.658 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.659 14:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.917 nvme0n1 00:20:51.917 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.917 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.917 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.917 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.917 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.917 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.917 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.917 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.917 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.917 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.175 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.176 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.434 nvme0n1 00:20:52.434 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.434 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.434 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.434 14:36:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.434 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.434 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.434 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.434 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.434 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.434 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: ]] 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.435 14:36:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.435 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.693 nvme0n1 00:20:52.693 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.693 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.693 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.693 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.693 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.693 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.693 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.693 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.693 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.693 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: ]] 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:52.952 14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.952 
14:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.210 nvme0n1 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.210 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.468 nvme0n1 00:20:53.468 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.468 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.468 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.468 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.468 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.468 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.727 14:36:45 
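For reference, each connect_authenticate pass traced here boils down to the following RPC sequence on the SPDK host side. This is a minimal sketch, assuming SPDK's scripts/rpc.py client; the address, NQNs and flags are the ones visible in the trace, and key3/ckey3 are names of DH-HMAC-CHAP secrets registered in the SPDK keyring earlier in the test, outside this excerpt:

    # restrict the initiator to one digest/DH-group combination, then authenticate
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    ./scripts/rpc.py bdev_nvme_get_controllers           # should report a single controller, nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0   # clean up before the next key id

The --dhchap-ctrlr-key argument is only passed when a controller (bidirectional) secret exists for the key id; for key id 4 the ckey is empty and the flag is omitted, as the trace shows.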
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: ]] 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.727 14:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.293 nvme0n1 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.293 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.859 nvme0n1 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: ]] 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:54.859 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.860 
14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.860 14:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.425 nvme0n1 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
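The DHHC-1 strings used as keys throughout this trace follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:NN:<base64 secret>:, where the two-digit NN field selects the optional secret-transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the raw secret plus a short checksum. Secrets of this form can be produced with a recent nvme-cli; the command below is an illustration only (option names may vary by version, and this is not necessarily how the test's fixed keys were generated):

    # generate a 48-byte secret with the SHA-384 transformation for the host NQN used in this trace
    nvme gen-dhchap-key --hmac=2 --key-length=48 --nqn=nqn.2024-02.io.spdk:host0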
DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: ]] 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.425 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.992 nvme0n1 00:20:55.992 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.992 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.992 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.992 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.992 14:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.992 14:36:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:55.992 14:36:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.992 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.559 nvme0n1 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
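At this point the trace moves from hmac(sha256) to hmac(sha384). The overall shape of the test is the nested loop sketched below, a simplified reconstruction based on the for-loops visible at host/auth.sh@100-103; the real script carries additional bookkeeping:

    # iterate every digest x DH-group x key-id combination
    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel nvmet target side
          connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify nvme0, detach on the SPDK host side
        done
      done
    done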
ckey=DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: ]] 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.559 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.560 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:56.560 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:56.560 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:56.560 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.560 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.560 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:56.560 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.560 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:56.560 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:56.560 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:56.560 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.560 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.560 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:56.818 nvme0n1 00:20:56.818 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.818 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.818 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.819 nvme0n1 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.819 14:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:56.819 
14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: ]] 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.819 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.077 nvme0n1 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: ]] 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.077 
14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:57.077 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.078 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.336 nvme0n1 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
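The echo commands issued by nvmet_auth_set_key (host/auth.sh@48-51 in this trace) configure the kernel target's view of the host. A minimal sketch of that half, assuming the standard Linux nvmet configfs attribute names; the exact paths the test writes to are outside this excerpt, so treat the layout as an assumption:

    # target-side DH-HMAC-CHAP setup for one host entry (hypothetical path layout)
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # digest, matching the 'echo hmac(...)' lines above
    echo ffdhe2048      > "$host_dir/dhchap_dhgroup"   # FFDHE group under test
    echo "$key"         > "$host_dir/dhchap_key"       # DHHC-1:... host secret for this key id
    echo "$ckey"        > "$host_dir/dhchap_ctrl_key"  # controller secret; skipped when ckey is empty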
common/autotest_common.sh@10 -- # set +x 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.336 nvme0n1 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.336 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: ]] 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.595 nvme0n1 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.595 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.595 
14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:57.596 14:36:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.596 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.855 nvme0n1 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:20:57.855 14:36:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: ]] 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.855 14:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.114 nvme0n1 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: ]] 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.114 14:36:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.114 nvme0n1 00:20:58.114 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:58.373 
14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
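For readability, here is a minimal sketch of the sequence each iteration of the trace above runs for a given digest/dhgroup/keyid combination. This is not a verbatim excerpt of host/auth.sh; it assumes the same rpc_cmd helper, the initiator address 10.0.0.1:4420, and the host/subsystem NQNs that appear in the trace, and it only collects the commands already visible in the log:

# Sketch of one connect_authenticate pass as exercised by the trace above (assumptions noted in the lead-in).
connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Restrict the host to a single digest and DH group for this pass.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach to the target with the DH-HMAC-CHAP key for this keyid; the controller
    # key is only passed when a ckey exists for the keyid, as in the trace.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # A successful authenticated connect surfaces the controller, which is then
    # detached so the next keyid can be tested.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

The surrounding loop (for dhgroup, then for keyid) simply repeats this pass, which is why the same set_options/attach/get_controllers/detach pattern recurs throughout the log.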
00:20:58.373 nvme0n1 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.373 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: ]] 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:58.632 14:36:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.632 nvme0n1 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.632 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.893 14:36:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.893 14:36:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.893 14:36:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.893 nvme0n1 00:20:58.893 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.893 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.893 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.893 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.893 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.893 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: ]] 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.171 nvme0n1 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.171 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: ]] 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.442 nvme0n1 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.442 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.701 nvme0n1 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.701 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: ]] 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.960 14:36:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.960 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:59.961 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.961 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:59.961 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:59.961 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:59.961 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.961 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.961 14:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.219 nvme0n1 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:00.219 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.220 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:00.220 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.220 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.220 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.220 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.220 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:00.220 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:00.220 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:00.220 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.220 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.220 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:00.220 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.220 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:00.220 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:00.220 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:00.220 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.220 14:36:52 
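
Each pass traced here follows the same host-side sequence: restrict the initiator to the digest/dhgroup under test, attach an authenticated controller, confirm it enumerates as nvme0, then detach it. A stand-alone sketch of the pass just shown (sha384 / ffdhe6144, key index 1), assuming SPDK's scripts/rpc.py is reachable at the path below and that the key objects key1/ckey1 were registered earlier in the run (outside this excerpt):

    # Mirror of the rpc_cmd calls in the trace; the rpc.py path is an assumption,
    # the addresses and NQNs are the ones printed above.
    rpc=./scripts/rpc.py
    "$rpc" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller came up authenticated
    "$rpc" bdev_nvme_detach_controller nvme0
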
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.220 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.478 nvme0n1 00:21:00.478 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.478 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.478 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.478 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.478 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.478 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: ]] 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.737 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.738 14:36:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.996 nvme0n1 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: ]] 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.996 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.254 nvme0n1 00:21:01.254 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.254 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.254 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.254 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.254 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.254 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:01.513 14:36:53 
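
On the target side, the echoes at host/auth.sh@48-51 in every pass ('hmac(sha384)', the dhgroup, the key and, when present, the controller key) are the values nvmet_auth_set_key appears to program into the kernel nvmet target for the test host. A hedged sketch of what that amounts to, assuming the usual nvmet configfs attribute names for in-band authentication; the exact paths are an assumption, not read from this trace:

    # Assumed kernel nvmet configfs layout for DH-HMAC-CHAP; hostnqn matches the -q value above.
    hostnqn=nqn.2024-02.io.spdk:host0
    host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha384)' > "$host_dir/dhchap_hash"
    echo ffdhe6144      > "$host_dir/dhchap_dhgroup"
    echo "$key"         > "$host_dir/dhchap_key"                     # host secret, DHHC-1:... string
    [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"     # only for bidirectional auth
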
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.513 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.772 nvme0n1 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
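
The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion that precedes every attach is why the pass just completed for key index 4 attached with --dhchap-key key4 alone: its controller key is empty, so the optional flag/value pair vanishes instead of being passed as an empty argument. A minimal illustration of that bash idiom (array contents here are illustrative):

    # ${var:+word} expands to nothing when var is empty or unset, so the array stays
    # empty and no --dhchap-ctrlr-key argument reaches bdev_nvme_attach_controller.
    ckeys=([0]="DHHC-1:03:..." [4]="")
    for keyid in 0 4; do
        extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#extra[@]} extra args"   # 2 for keyid 0, 0 for keyid 4
    done
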
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: ]] 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.772 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.773 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.773 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:01.773 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:01.773 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:01.773 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.773 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.773 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:01.773 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.773 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:01.773 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:01.773 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:01.773 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.773 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.773 14:36:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.339 nvme0n1 00:21:02.339 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.339 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.339 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.339 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.339 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.339 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.339 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.339 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.339 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.340 14:36:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.907 nvme0n1 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.907 14:36:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: ]] 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:21:02.907 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.908 14:36:55 
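
The secrets themselves all share the DHHC-1 representation used for NVMe in-band authentication: a 'DHHC-1:' prefix, a two-digit hash indicator, then base64 of the secret followed by a short CRC, closed by ':'. A quick sanity check of one of the keys from this run; the reading of the format is hedged, the key string is copied from the trace above:

    # Decode one traced secret: 48 base64 chars -> 36 bytes, i.e. a 32-byte secret plus a 4-byte CRC.
    key='DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv:'
    b64=${key#DHHC-1:??:}; b64=${b64%:}
    echo -n "$b64" | base64 -d | wc -c    # prints 36
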
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.908 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.473 nvme0n1 00:21:03.473 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.473 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.473 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.473 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.473 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.473 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: ]] 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:03.731 14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.731 
14:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.298 nvme0n1 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.298 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.299 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.299 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:04.299 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:04.299 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:04.299 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.299 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.299 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:04.299 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.299 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:04.299 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:04.299 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:04.299 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:04.299 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.299 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.865 nvme0n1 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:21:04.865 14:36:56 
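
The switch just above, from sha384/ffdhe8192 back down to ffdhe2048 under sha512, is the outer loops advancing: the run sweeps digests, then DH groups, then every key index, with the host pinned to exactly the combination being exercised so a successful fallback cannot mask a broken pairing. The shape of that sweep, reconstructed from the loop markers at host/auth.sh@100-104; only part of each array is visible in this excerpt, so the contents below are illustrative:

    # Reconstructed driver loop; nvmet_auth_set_key and connect_authenticate are the
    # helpers traced above, array contents are illustrative placeholders.
    digests=(sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)
    keys=(key0 key1 key2 key3 key4)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
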
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: ]] 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:04.865 14:36:56 
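
get_main_ns_ip, replayed before every attach, simply chooses which address the initiator should dial: a transport-to-variable map is consulted, tcp resolves to NVMF_INITIATOR_IP, and 10.0.0.1 is echoed back. A condensed sketch of that selection; the name of the variable carrying the transport is an assumption, the candidate map and the result follow the trace:

    # Condensed from the nvmf/common.sh lines traced above; echoes the initiator-side address for tcp.
    pick_main_ns_ip() {
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        local var=${ip_candidates[${TEST_TRANSPORT:-tcp}]}   # tcp -> NVMF_INITIATOR_IP
        echo "${!var}"                                       # indirect expansion -> 10.0.0.1 here
    }
    NVMF_INITIATOR_IP=10.0.0.1 pick_main_ns_ip
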
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:04.865 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:04.866 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.866 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.866 14:36:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.866 nvme0n1 00:21:04.866 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.866 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.866 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.866 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.866 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:05.124 14:36:57 
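
Two markers bracket every RPC in this log: the common/autotest_common.sh@563 xtrace_disable before the call and the "[[ 0 == 0 ]]" status check at @591 after it. They come from the rpc_cmd wrapper, which silences tracing while the RPC client runs and then re-asserts its exit status once tracing is back on. The sketch below reconstructs only that observable pattern, not the wrapper's actual implementation, and the rpc.py path is an assumption:

    # Behavioural sketch only: pause xtrace, run the RPC, restore xtrace, then fail the
    # caller if the saved status was non-zero (success shows up as "[[ 0 == 0 ]]" in the trace).
    rpc_cmd() {
        set +x                       # corresponds to the xtrace_disable marker in the log
        local rc=0
        ./scripts/rpc.py "$@" || rc=$?
        set -x                       # tracing back on
        [[ $rc == 0 ]]
    }
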
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.124 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.125 nvme0n1 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: ]] 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.125 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.383 nvme0n1 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:21:05.383 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: ]] 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.384 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.642 nvme0n1 00:21:05.642 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.642 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.642 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.642 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.642 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.642 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.643 nvme0n1 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.643 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: ]] 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:05.902 nvme0n1 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.902 14:36:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.902 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.903 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.161 nvme0n1 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:21:06.161 
14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:06.161 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: ]] 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.162 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.421 nvme0n1 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: ]] 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.421 
14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.421 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.680 nvme0n1 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.680 nvme0n1 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.680 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: ]] 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.939 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.940 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.940 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.940 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.940 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.940 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.940 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.940 14:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.940 nvme0n1 00:21:06.940 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.940 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.940 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.940 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.940 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.940 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.940 
14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.940 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.940 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.940 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.198 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.198 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.198 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:07.198 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.198 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.198 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:07.198 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:07.198 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:07.199 14:36:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.199 nvme0n1 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.199 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.457 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.457 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.457 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.457 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:07.458 14:36:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: ]] 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.458 nvme0n1 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.458 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: ]] 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.717 14:36:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.717 nvme0n1 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.717 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.975 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.975 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.975 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:07.975 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.975 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.975 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:07.975 
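[editorial note] Between reconfiguring and attaching, the helper the trace labels get_main_ns_ip picks the address to dial from a small transport-to-variable map. A sketch of that selection logic as it appears in the nvmf/common.sh lines above (variable names are copied from the trace; the error-handling shape is an assumption):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA tests dial the first target IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP tests dial the initiator-side address
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}         # the variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                  # indirect expansion; 10.0.0.1 in this run
        echo "${!ip}"
    }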
14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:07.975 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:21:07.975 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:07.975 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.975 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:07.975 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:21:07.975 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:07.975 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:21:07.975 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.976 14:36:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:21:07.976 nvme0n1 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: ]] 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.976 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:08.234 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:08.234 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:08.234 14:37:00 
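[editorial note] The same cycle is driven by a pair of nested loops: for each FFDHE group, every key index (0 through 4 in this run) is first installed on the target side and then exercised end to end. A sketch of the driver loop implied by the host/auth.sh@101-103 markers in the trace (the array names keys, ckeys and dhgroups come from the log; their contents are only hinted at here):

    declare -a keys ckeys          # populated with the DHHC-1:xx:...: strings echoed in the trace
    dhgroups=("ffdhe4096" "ffdhe6144" "ffdhe8192")

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "sha512" "$dhgroup" "$keyid"     # program the target side
            connect_authenticate "sha512" "$dhgroup" "$keyid"   # attach, verify, detach
        done
    done

The digest is fixed to sha512 only for the portion of the run shown in this chunk of the log.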
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.234 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.235 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.235 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.235 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.235 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.235 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:08.235 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:08.235 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:08.235 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.235 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.235 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:08.235 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.235 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:08.235 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:08.235 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:08.235 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.235 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.235 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.493 nvme0n1 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.493 14:37:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:08.493 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.494 14:37:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.494 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.751 nvme0n1 00:21:08.751 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.751 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.751 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.751 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.751 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.751 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: ]] 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.008 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.009 14:37:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.267 nvme0n1 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
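[editorial note] Every secret in the trace is carried in the DH-HMAC-CHAP text representation "DHHC-1:xx:<base64>:", where, per the usual representation, the base64 payload is the raw secret followed by a 4-byte CRC-32 of it. A quick, hedged way to sanity-check a key string of that shape (the helper name is ad hoc, not part of the test suite):

    # Report the secret length inside a DHHC-1 key string, assuming the
    # standard representation: base64( secret || CRC-32(secret) ).
    dhchap_secret_len() {
        local key=$1 b64 total
        b64=${key#DHHC-1:??:}      # strip the "DHHC-1:xx:" prefix
        b64=${b64%:}               # strip the trailing ":"
        total=$(printf '%s' "$b64" | base64 -d | wc -c)
        echo "$(( total - 4 )) byte secret (+ 4 byte CRC)"
    }

Applied to the strings echoed above, the ":00:"/":01:" keys decode to 32-byte secrets, the ":02:" keys to 48 bytes, and the ":03:" key to 64 bytes.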
bdev_nvme_detach_controller nvme0 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: ]] 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.267 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.525 nvme0n1 00:21:09.525 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.525 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.525 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.525 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.525 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.783 14:37:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.042 nvme0n1 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzRkYWQ2YzAzNmZkZWMzYjE2ODA0YTcxZDA1YThiNzk3cqXv: 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: ]] 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTM2MDM2NjEyMWIzODJiMmM3ZjkxZjVhZmYyNTI4YWY1YmIxZDA1NjY1NjE1YjA0ZGRmMmJjYzVkNjU1Njg5Y4I367o=: 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.042 14:37:02 
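[editorial note] Note that bdev_nvme_set_options is re-issued before every attach so that negotiation can only succeed with the single digest/dhgroup pair under test. Assuming rpc_cmd forwards to SPDK's scripts/rpc.py against the default application socket (an assumption; the wrapper itself is not shown in this chunk), the ffdhe8192/key0 iteration above corresponds roughly to:

    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

The key0/ckey0 arguments appear to be names of keys registered earlier in the test run rather than the literal DHHC-1 strings; that registration step is outside this chunk of the log.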
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.042 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.609 nvme0n1 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:10.609 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:10.868 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.868 14:37:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.868 14:37:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.126 nvme0n1 00:21:11.126 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.126 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.126 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.126 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.126 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: ]] 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
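[editorial note] One detail worth calling out in the recurring host/auth.sh@58 line: the controller key is passed through a ${ckeys[keyid]:+...} expansion, so when the entry is empty (key index 4 in this run, whose "ckey=" line above is blank) the --dhchap-ctrlr-key argument disappears entirely and the attach exercises unidirectional authentication. A small standalone illustration of that expansion (array contents are placeholders):

    declare -a ckeys=([0]="ckey0-secret" [4]="")   # index 4 deliberately empty, as in the trace

    for keyid in 0 4; do
        # ":+" expands to the alternate words only when the entry is set and non-empty.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-no controller key, unidirectional auth}"
    done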
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.384 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.385 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.952 nvme0n1 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzA2MDU3NDgxNzRiYjE0MzhmN2I2OGIzYWI5NDlhZTIzZWI5ZWRkOWVlNTllYjlmpRBA8Q==: 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: ]] 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjZhN2E0M2ZlMTRhYTJmZTQxODYzNWI1ZDhiMGFkNDWMY79i: 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:11.952 14:37:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.952 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.952 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:11.952 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.952 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:11.952 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
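[editorial note] The controller-name check that follows every attach, [[ nvme0 == \n\v\m\e\0 ]] in the trace, backslash-escapes each character on the right-hand side so that [[ == ]] compares against a literal string instead of treating it as a glob pattern; this is how the test asserts that bdev_nvme_get_controllers returned exactly "nvme0". A one-line illustration of the idiom (the variable name is illustrative):

    name="nvme0"
    [[ $name == \n\v\m\e\0 ]] && echo "literal match"   # equivalent to [[ $name == "nvme0" ]]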
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:11.952 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:11.952 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:11.952 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.952 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.519 nvme0n1 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTgxZTllZTRlZTExNDEyZGQ5ZGE4YzU4ZGM1ZjgzODI3MGQ2ZTQzZjU2ZjFkMzE2ZmMwYzQwZTEzNWZhMWU2Y1EJCbc=: 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:12.519 14:37:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.519 14:37:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.086 nvme0n1 00:21:13.086 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.086 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.086 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.086 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.086 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.086 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.086 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.086 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.086 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.086 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.086 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.086 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.087 request: 00:21:13.087 { 00:21:13.087 "name": "nvme0", 00:21:13.087 "trtype": "tcp", 00:21:13.087 "traddr": "10.0.0.1", 00:21:13.087 "adrfam": "ipv4", 00:21:13.087 "trsvcid": "4420", 00:21:13.087 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:13.087 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:13.087 "prchk_reftag": false, 00:21:13.087 "prchk_guard": false, 00:21:13.087 "hdgst": false, 00:21:13.087 "ddgst": false, 00:21:13.087 "allow_unrecognized_csi": false, 00:21:13.087 "method": "bdev_nvme_attach_controller", 00:21:13.087 "req_id": 1 00:21:13.087 } 00:21:13.087 Got JSON-RPC error response 00:21:13.087 response: 00:21:13.087 { 00:21:13.087 "code": -5, 00:21:13.087 "message": "Input/output error" 00:21:13.087 } 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:13.087 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.346 request: 00:21:13.346 { 00:21:13.346 "name": "nvme0", 00:21:13.346 "trtype": "tcp", 00:21:13.346 "traddr": "10.0.0.1", 00:21:13.346 "adrfam": "ipv4", 00:21:13.346 "trsvcid": "4420", 00:21:13.346 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:13.346 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:13.346 "prchk_reftag": false, 00:21:13.346 "prchk_guard": false, 00:21:13.346 "hdgst": false, 00:21:13.346 "ddgst": false, 00:21:13.346 "dhchap_key": "key2", 00:21:13.346 "allow_unrecognized_csi": false, 00:21:13.346 "method": "bdev_nvme_attach_controller", 00:21:13.346 "req_id": 1 00:21:13.346 } 00:21:13.346 Got JSON-RPC error response 00:21:13.346 response: 00:21:13.346 { 00:21:13.346 "code": -5, 00:21:13.346 "message": "Input/output error" 00:21:13.346 } 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:13.346 14:37:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.346 request: 00:21:13.346 { 00:21:13.346 "name": "nvme0", 00:21:13.346 "trtype": "tcp", 00:21:13.346 "traddr": "10.0.0.1", 00:21:13.346 "adrfam": "ipv4", 00:21:13.346 "trsvcid": "4420", 
00:21:13.346 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:13.346 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:13.346 "prchk_reftag": false, 00:21:13.346 "prchk_guard": false, 00:21:13.346 "hdgst": false, 00:21:13.346 "ddgst": false, 00:21:13.346 "dhchap_key": "key1", 00:21:13.346 "dhchap_ctrlr_key": "ckey2", 00:21:13.346 "allow_unrecognized_csi": false, 00:21:13.346 "method": "bdev_nvme_attach_controller", 00:21:13.346 "req_id": 1 00:21:13.346 } 00:21:13.346 Got JSON-RPC error response 00:21:13.346 response: 00:21:13.346 { 00:21:13.346 "code": -5, 00:21:13.346 "message": "Input/output error" 00:21:13.346 } 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.346 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.604 nvme0n1 00:21:13.604 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.604 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:13.604 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.604 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:13.604 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:13.604 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:13.604 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:13.604 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:13.604 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:13.604 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:13.604 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: ]] 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.605 request: 00:21:13.605 { 00:21:13.605 "name": "nvme0", 00:21:13.605 "dhchap_key": "key1", 00:21:13.605 "dhchap_ctrlr_key": "ckey2", 00:21:13.605 "method": "bdev_nvme_set_keys", 00:21:13.605 "req_id": 1 00:21:13.605 } 00:21:13.605 Got JSON-RPC error response 00:21:13.605 response: 00:21:13.605 
{ 00:21:13.605 "code": -13, 00:21:13.605 "message": "Permission denied" 00:21:13.605 } 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:13.605 14:37:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:14.540 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.540 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:14.540 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.540 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.540 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDEzM2FiZDMzM2I1MWQ3ODExZGM4ZThiNDNiYzIxNDQwNmE3NDM3MGJiNzMzNzgzMEwtMg==: 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: ]] 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzA0ZWM1YmQ2ODA4MGRlZjIwOTVmNzhjYmFjMzliYjUzOTczMDlmYTgyODJhNzU42fmvIw==: 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.800 nvme0n1 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA0MjZhYWQ5ZWUyMWZkY2Y0NGJlYjA1M2Q0M2VkZDhT/wR9: 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: ]] 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTk3OGQ1MjE5ZGJmYWRiZmEwZDIzZmE0OGU1Y2QwZTZVW5nn: 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.800 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.800 request: 00:21:14.800 { 00:21:14.800 "name": "nvme0", 00:21:14.800 "dhchap_key": "key2", 00:21:14.800 "dhchap_ctrlr_key": "ckey1", 00:21:14.800 "method": "bdev_nvme_set_keys", 00:21:14.800 "req_id": 1 00:21:14.801 } 00:21:14.801 Got JSON-RPC error response 00:21:14.801 response: 00:21:14.801 { 00:21:14.801 "code": -13, 00:21:14.801 "message": "Permission denied" 00:21:14.801 } 00:21:14.801 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:14.801 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:14.801 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:14.801 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:14.801 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:14.801 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.801 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:14.801 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.801 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.801 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.801 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:14.801 14:37:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:16.178 14:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.178 14:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:16.178 14:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.178 14:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.178 14:37:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.178 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:16.179 rmmod nvme_tcp 00:21:16.179 rmmod nvme_fabrics 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 94667 ']' 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 94667 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 94667 ']' 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 94667 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94667 00:21:16.179 killing process with pid 94667 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94667' 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 94667 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 94667 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:16.179 14:37:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:16.179 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:16.438 14:37:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:17.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:17.264 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:21:17.264 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:17.264 14:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5R5 /tmp/spdk.key-null.ixC /tmp/spdk.key-sha256.8sv /tmp/spdk.key-sha384.EoW /tmp/spdk.key-sha512.7CX /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:21:17.264 14:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:17.831 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:17.831 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:17.831 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:17.831 00:21:17.831 real 0m35.364s 00:21:17.831 user 0m32.586s 00:21:17.831 sys 0m3.852s 00:21:17.831 14:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:17.831 14:37:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.831 ************************************ 00:21:17.831 END TEST nvmf_auth_host 00:21:17.831 ************************************ 00:21:17.831 14:37:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:21:17.831 14:37:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:17.831 14:37:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:17.831 14:37:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:17.831 14:37:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.831 ************************************ 00:21:17.831 START TEST nvmf_digest 00:21:17.831 ************************************ 00:21:17.831 14:37:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:17.831 * Looking for test storage... 
00:21:17.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:17.831 14:37:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:17.831 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:21:17.831 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:18.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.108 --rc genhtml_branch_coverage=1 00:21:18.108 --rc genhtml_function_coverage=1 00:21:18.108 --rc genhtml_legend=1 00:21:18.108 --rc geninfo_all_blocks=1 00:21:18.108 --rc geninfo_unexecuted_blocks=1 00:21:18.108 00:21:18.108 ' 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:18.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.108 --rc genhtml_branch_coverage=1 00:21:18.108 --rc genhtml_function_coverage=1 00:21:18.108 --rc genhtml_legend=1 00:21:18.108 --rc geninfo_all_blocks=1 00:21:18.108 --rc geninfo_unexecuted_blocks=1 00:21:18.108 00:21:18.108 ' 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:18.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.108 --rc genhtml_branch_coverage=1 00:21:18.108 --rc genhtml_function_coverage=1 00:21:18.108 --rc genhtml_legend=1 00:21:18.108 --rc geninfo_all_blocks=1 00:21:18.108 --rc geninfo_unexecuted_blocks=1 00:21:18.108 00:21:18.108 ' 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:18.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.108 --rc genhtml_branch_coverage=1 00:21:18.108 --rc genhtml_function_coverage=1 00:21:18.108 --rc genhtml_legend=1 00:21:18.108 --rc geninfo_all_blocks=1 00:21:18.108 --rc geninfo_unexecuted_blocks=1 00:21:18.108 00:21:18.108 ' 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:18.108 14:37:10 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:18.108 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:18.109 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:18.109 Cannot find device "nvmf_init_br" 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:18.109 Cannot find device "nvmf_init_br2" 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:18.109 Cannot find device "nvmf_tgt_br" 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:21:18.109 Cannot find device "nvmf_tgt_br2" 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:18.109 Cannot find device "nvmf_init_br" 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:18.109 Cannot find device "nvmf_init_br2" 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:18.109 Cannot find device "nvmf_tgt_br" 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:18.109 Cannot find device "nvmf_tgt_br2" 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:18.109 Cannot find device "nvmf_br" 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:18.109 Cannot find device "nvmf_init_if" 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:18.109 Cannot find device "nvmf_init_if2" 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:18.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:18.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:18.109 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:18.378 14:37:10 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:18.378 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:18.378 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:21:18.378 00:21:18.378 --- 10.0.0.3 ping statistics --- 00:21:18.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.378 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:18.378 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:18.378 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:21:18.378 00:21:18.378 --- 10.0.0.4 ping statistics --- 00:21:18.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.378 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:18.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:18.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:21:18.378 00:21:18.378 --- 10.0.0.1 ping statistics --- 00:21:18.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.378 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:18.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:21:18.378 00:21:18.378 --- 10.0.0.2 ping statistics --- 00:21:18.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.378 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:18.378 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:18.379 ************************************ 00:21:18.379 START TEST nvmf_digest_clean 00:21:18.379 ************************************ 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
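The nvmf_veth_init block traced above is the per-run network build-up: a namespace for the target, two veth pairs, a bridge joining their peer ends, addresses, iptables ACCEPT rules for port 4420, and the four pings as a connectivity check. A condensed sketch of that sequence follows; device names and addresses are exactly as traced above, the second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4) is built the same way and omitted, and the individual link-up steps are folded into the comments.

  ip netns add nvmf_tgt_ns_spdk                                     # target side lives in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br          # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                    # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                          # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target listen address
  ip link add nvmf_br type bridge && ip link set nvmf_br up         # bridge joining the *_br peer ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT # admit NVMe/TCP traffic from the initiator
  # connectivity is then verified with ping in both directions and nvme-tcp is modprobed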
00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:18.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=96296 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 96296 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 96296 ']' 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.379 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:18.637 [2024-12-16 14:37:10.610895] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:21:18.637 [2024-12-16 14:37:10.610994] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.637 [2024-12-16 14:37:10.764722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.637 [2024-12-16 14:37:10.787918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.637 [2024-12-16 14:37:10.787979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.637 [2024-12-16 14:37:10.787993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.637 [2024-12-16 14:37:10.788003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.637 [2024-12-16 14:37:10.788011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:18.637 [2024-12-16 14:37:10.788367] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.896 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.896 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:18.896 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:18.896 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:18.896 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:18.896 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.896 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:18.896 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:21:18.896 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:21:18.896 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.896 14:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:18.896 [2024-12-16 14:37:10.954950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:18.896 null0 00:21:18.896 [2024-12-16 14:37:10.991514] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.896 [2024-12-16 14:37:11.015647] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:18.896 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.896 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:18.896 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:18.896 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:18.896 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:18.896 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:18.896 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:18.896 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:18.896 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=96315 00:21:18.896 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:18.896 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 96315 /var/tmp/bperf.sock 00:21:18.896 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 96315 ']' 00:21:18.896 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:21:18.896 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:18.896 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:18.896 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.896 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:19.154 [2024-12-16 14:37:11.100785] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:21:19.154 [2024-12-16 14:37:11.100918] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96315 ] 00:21:19.154 [2024-12-16 14:37:11.269100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.154 [2024-12-16 14:37:11.293268] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.412 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.412 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:19.412 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:19.412 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:19.412 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:19.670 [2024-12-16 14:37:11.624301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:19.670 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:19.670 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:19.928 nvme0n1 00:21:19.928 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:19.928 14:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:19.928 Running I/O for 2 seconds... 
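Each of the four digest-clean runs in this test follows the same driver pattern, which for this first randread run reduces to the four commands below. Binary paths, the /var/tmp/bperf.sock socket, the target address, and the NQN are exactly as traced above; this is a hand-condensed outline of the trace, not an additional command set that was executed.

  # 1. start bdevperf idle on core 1, listening on its own RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # 2. finish framework initialization over that socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # 3. attach the NVMe/TCP controller with data digest enabled (--ddgst is what generates the crc32c work)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 4. run the 2-second workload against the resulting nvme0n1 bdev
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The later runs differ only in the workload parameters: randread 131072/16, randwrite 4096/128, and randwrite 131072/16.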
00:21:22.238 17526.00 IOPS, 68.46 MiB/s [2024-12-16T14:37:14.439Z] 17653.00 IOPS, 68.96 MiB/s 00:21:22.239 Latency(us) 00:21:22.239 [2024-12-16T14:37:14.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.239 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:22.239 nvme0n1 : 2.01 17669.04 69.02 0.00 0.00 7239.43 6642.97 20018.27 00:21:22.239 [2024-12-16T14:37:14.439Z] =================================================================================================================== 00:21:22.239 [2024-12-16T14:37:14.439Z] Total : 17669.04 69.02 0.00 0.00 7239.43 6642.97 20018.27 00:21:22.239 { 00:21:22.239 "results": [ 00:21:22.239 { 00:21:22.239 "job": "nvme0n1", 00:21:22.239 "core_mask": "0x2", 00:21:22.239 "workload": "randread", 00:21:22.239 "status": "finished", 00:21:22.239 "queue_depth": 128, 00:21:22.239 "io_size": 4096, 00:21:22.239 "runtime": 2.005429, 00:21:22.239 "iops": 17669.037397983175, 00:21:22.239 "mibps": 69.01967733587178, 00:21:22.239 "io_failed": 0, 00:21:22.239 "io_timeout": 0, 00:21:22.239 "avg_latency_us": 7239.434255132461, 00:21:22.239 "min_latency_us": 6642.967272727273, 00:21:22.239 "max_latency_us": 20018.269090909092 00:21:22.239 } 00:21:22.239 ], 00:21:22.239 "core_count": 1 00:21:22.239 } 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:22.239 | select(.opcode=="crc32c") 00:21:22.239 | "\(.module_name) \(.executed)"' 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 96315 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 96315 ']' 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 96315 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96315 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:21:22.239 killing process with pid 96315 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96315' 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 96315 00:21:22.239 Received shutdown signal, test time was about 2.000000 seconds 00:21:22.239 00:21:22.239 Latency(us) 00:21:22.239 [2024-12-16T14:37:14.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.239 [2024-12-16T14:37:14.439Z] =================================================================================================================== 00:21:22.239 [2024-12-16T14:37:14.439Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.239 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 96315 00:21:22.498 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:22.498 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:22.498 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:22.498 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:22.498 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:22.498 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:22.498 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:22.498 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=96362 00:21:22.498 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:22.498 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 96362 /var/tmp/bperf.sock 00:21:22.498 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 96362 ']' 00:21:22.498 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:22.498 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:22.498 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:22.498 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.498 14:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:22.498 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:22.498 Zero copy mechanism will not be used. 00:21:22.498 [2024-12-16 14:37:14.570015] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:21:22.498 [2024-12-16 14:37:14.570123] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96362 ] 00:21:22.756 [2024-12-16 14:37:14.715476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.756 [2024-12-16 14:37:14.733853] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.323 14:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.323 14:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:23.323 14:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:23.323 14:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:23.323 14:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:23.581 [2024-12-16 14:37:15.720110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:23.581 14:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:23.581 14:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:23.839 nvme0n1 00:21:24.097 14:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:24.097 14:37:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:24.097 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:24.097 Zero copy mechanism will not be used. 00:21:24.097 Running I/O for 2 seconds... 
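The MiB/s column in each bdevperf summary is simply IOPS times the I/O size; as a sanity check, the totals for the two randread runs can be re-derived from the figures in their result blocks above and below (1 MiB = 1048576 bytes).

  # MiB/s = IOPS * io_size_bytes / 1048576
  awk 'BEGIN { printf "%.2f\n", 17669.04 * 4096   / 1048576 }'   # -> 69.02   (randread, 4 KiB, qd 128)
  awk 'BEGIN { printf "%.2f\n", 8974.40  * 131072 / 1048576 }'   # -> 1121.80 (randread, 128 KiB, qd 16)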
00:21:26.408 8960.00 IOPS, 1120.00 MiB/s [2024-12-16T14:37:18.608Z] 8976.00 IOPS, 1122.00 MiB/s 00:21:26.408 Latency(us) 00:21:26.408 [2024-12-16T14:37:18.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.408 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:26.408 nvme0n1 : 2.00 8974.40 1121.80 0.00 0.00 1780.03 1578.82 6642.97 00:21:26.408 [2024-12-16T14:37:18.608Z] =================================================================================================================== 00:21:26.408 [2024-12-16T14:37:18.608Z] Total : 8974.40 1121.80 0.00 0.00 1780.03 1578.82 6642.97 00:21:26.408 { 00:21:26.408 "results": [ 00:21:26.408 { 00:21:26.408 "job": "nvme0n1", 00:21:26.408 "core_mask": "0x2", 00:21:26.408 "workload": "randread", 00:21:26.408 "status": "finished", 00:21:26.408 "queue_depth": 16, 00:21:26.408 "io_size": 131072, 00:21:26.408 "runtime": 2.002139, 00:21:26.408 "iops": 8974.401877192342, 00:21:26.408 "mibps": 1121.8002346490427, 00:21:26.408 "io_failed": 0, 00:21:26.408 "io_timeout": 0, 00:21:26.408 "avg_latency_us": 1780.0314352788796, 00:21:26.408 "min_latency_us": 1578.8218181818181, 00:21:26.408 "max_latency_us": 6642.967272727273 00:21:26.408 } 00:21:26.408 ], 00:21:26.408 "core_count": 1 00:21:26.408 } 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:26.408 | select(.opcode=="crc32c") 00:21:26.408 | "\(.module_name) \(.executed)"' 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 96362 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 96362 ']' 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 96362 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96362 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:21:26.408 killing process with pid 96362 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96362' 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 96362 00:21:26.408 Received shutdown signal, test time was about 2.000000 seconds 00:21:26.408 00:21:26.408 Latency(us) 00:21:26.408 [2024-12-16T14:37:18.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.408 [2024-12-16T14:37:18.608Z] =================================================================================================================== 00:21:26.408 [2024-12-16T14:37:18.608Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 96362 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:26.408 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:26.666 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:26.666 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=96421 00:21:26.666 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 96421 /var/tmp/bperf.sock 00:21:26.666 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 96421 ']' 00:21:26.666 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:26.666 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:26.666 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:26.666 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.666 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:26.666 [2024-12-16 14:37:18.645851] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:21:26.666 [2024-12-16 14:37:18.645937] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96421 ] 00:21:26.666 [2024-12-16 14:37:18.783874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.666 [2024-12-16 14:37:18.802195] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.925 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.925 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:26.925 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:26.925 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:26.925 14:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:27.183 [2024-12-16 14:37:19.204027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:27.183 14:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:27.183 14:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:27.441 nvme0n1 00:21:27.441 14:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:27.442 14:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:27.700 Running I/O for 2 seconds... 
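After each run the harness verifies that the digest work was really performed, and by which accel module; with DSA disabled (scan_dsa=false) the expected module is software. Condensed from the get_accel_stats trace: the jq filter is the one digest.sh uses, and the two trailing checks are a sketch of what the surrounding tests assert.

  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 ))           # crc32c must have executed at least once
  [[ $acc_module == software ]]    # and in the software module, since DSA is off in this run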
00:21:29.568 19053.00 IOPS, 74.43 MiB/s [2024-12-16T14:37:21.768Z] 19178.50 IOPS, 74.92 MiB/s 00:21:29.568 Latency(us) 00:21:29.568 [2024-12-16T14:37:21.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.568 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:29.568 nvme0n1 : 2.00 19214.41 75.06 0.00 0.00 6655.60 4408.79 14537.08 00:21:29.568 [2024-12-16T14:37:21.768Z] =================================================================================================================== 00:21:29.568 [2024-12-16T14:37:21.768Z] Total : 19214.41 75.06 0.00 0.00 6655.60 4408.79 14537.08 00:21:29.568 { 00:21:29.568 "results": [ 00:21:29.568 { 00:21:29.568 "job": "nvme0n1", 00:21:29.568 "core_mask": "0x2", 00:21:29.568 "workload": "randwrite", 00:21:29.568 "status": "finished", 00:21:29.568 "queue_depth": 128, 00:21:29.568 "io_size": 4096, 00:21:29.568 "runtime": 2.002924, 00:21:29.568 "iops": 19214.408534722235, 00:21:29.568 "mibps": 75.05628333875873, 00:21:29.568 "io_failed": 0, 00:21:29.568 "io_timeout": 0, 00:21:29.568 "avg_latency_us": 6655.596190818146, 00:21:29.568 "min_latency_us": 4408.785454545455, 00:21:29.568 "max_latency_us": 14537.076363636364 00:21:29.568 } 00:21:29.568 ], 00:21:29.568 "core_count": 1 00:21:29.568 } 00:21:29.568 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:29.568 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:29.568 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:29.568 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:29.568 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:29.568 | select(.opcode=="crc32c") 00:21:29.568 | "\(.module_name) \(.executed)"' 00:21:29.827 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:29.827 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:29.827 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:29.827 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:29.827 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 96421 00:21:29.827 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 96421 ']' 00:21:29.827 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 96421 00:21:29.827 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:29.827 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.827 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96421 00:21:29.827 killing process with pid 96421 00:21:29.827 Received shutdown signal, test time was about 2.000000 seconds 00:21:29.827 00:21:29.827 Latency(us) 00:21:29.827 [2024-12-16T14:37:22.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:29.827 [2024-12-16T14:37:22.027Z] =================================================================================================================== 00:21:29.827 [2024-12-16T14:37:22.027Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.827 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:29.827 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:29.827 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96421' 00:21:29.828 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 96421 00:21:29.828 14:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 96421 00:21:30.087 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:30.087 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:30.087 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:30.087 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:30.087 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:30.087 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:30.087 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:30.087 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:30.087 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=96471 00:21:30.087 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 96471 /var/tmp/bperf.sock 00:21:30.087 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 96471 ']' 00:21:30.087 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:30.087 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.087 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:30.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:30.087 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.087 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:30.087 [2024-12-16 14:37:22.151325] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:21:30.087 [2024-12-16 14:37:22.151580] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96471 ] 00:21:30.087 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:30.087 Zero copy mechanism will not be used. 00:21:30.346 [2024-12-16 14:37:22.291067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.346 [2024-12-16 14:37:22.310478] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.346 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.346 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:30.346 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:30.346 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:30.346 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:30.605 [2024-12-16 14:37:22.624449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:30.605 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:30.605 14:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:30.864 nvme0n1 00:21:30.864 14:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:30.864 14:37:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:31.122 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:31.122 Zero copy mechanism will not be used. 00:21:31.122 Running I/O for 2 seconds... 
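Once a run's summary and JSON results are printed, its bdevperf instance is shut down through the killprocess helper; stripped to its effective steps for the instance started above (pid 96471), the sequence is the one below. The process-name lookup is there so sudo-wrapped processes can be handled differently; here it resolves to reactor_1.

  kill -0 96471                         # confirm the pid still exists
  ps --no-headers -o comm= 96471        # -> reactor_1 (not a sudo wrapper)
  echo 'killing process with pid 96471'
  kill 96471
  wait 96471                            # reap it so the next run starts from a clean slate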
00:21:32.999 7321.00 IOPS, 915.12 MiB/s [2024-12-16T14:37:25.199Z] 7346.50 IOPS, 918.31 MiB/s 00:21:32.999 Latency(us) 00:21:32.999 [2024-12-16T14:37:25.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.999 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:33.000 nvme0n1 : 2.00 7343.03 917.88 0.00 0.00 2173.73 1586.27 10307.03 00:21:33.000 [2024-12-16T14:37:25.200Z] =================================================================================================================== 00:21:33.000 [2024-12-16T14:37:25.200Z] Total : 7343.03 917.88 0.00 0.00 2173.73 1586.27 10307.03 00:21:33.000 { 00:21:33.000 "results": [ 00:21:33.000 { 00:21:33.000 "job": "nvme0n1", 00:21:33.000 "core_mask": "0x2", 00:21:33.000 "workload": "randwrite", 00:21:33.000 "status": "finished", 00:21:33.000 "queue_depth": 16, 00:21:33.000 "io_size": 131072, 00:21:33.000 "runtime": 2.003125, 00:21:33.000 "iops": 7343.026521060842, 00:21:33.000 "mibps": 917.8783151326053, 00:21:33.000 "io_failed": 0, 00:21:33.000 "io_timeout": 0, 00:21:33.000 "avg_latency_us": 2173.7274717396276, 00:21:33.000 "min_latency_us": 1586.269090909091, 00:21:33.000 "max_latency_us": 10307.025454545455 00:21:33.000 } 00:21:33.000 ], 00:21:33.000 "core_count": 1 00:21:33.000 } 00:21:33.000 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:33.000 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:33.000 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:33.000 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:33.000 | select(.opcode=="crc32c") 00:21:33.000 | "\(.module_name) \(.executed)"' 00:21:33.000 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:33.258 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:33.258 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:33.258 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:33.258 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:33.258 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 96471 00:21:33.258 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 96471 ']' 00:21:33.258 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 96471 00:21:33.258 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:33.258 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.258 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96471 00:21:33.258 killing process with pid 96471 00:21:33.258 Received shutdown signal, test time was about 2.000000 seconds 00:21:33.258 00:21:33.258 Latency(us) 00:21:33.258 [2024-12-16T14:37:25.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:33.258 [2024-12-16T14:37:25.458Z] =================================================================================================================== 00:21:33.258 [2024-12-16T14:37:25.458Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:33.258 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:33.258 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:33.258 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96471' 00:21:33.258 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 96471 00:21:33.258 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 96471 00:21:33.517 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 96296 00:21:33.517 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 96296 ']' 00:21:33.517 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 96296 00:21:33.517 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:33.517 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.517 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96296 00:21:33.517 killing process with pid 96296 00:21:33.517 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:33.517 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:33.517 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96296' 00:21:33.517 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 96296 00:21:33.517 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 96296 00:21:33.517 ************************************ 00:21:33.517 END TEST nvmf_digest_clean 00:21:33.517 ************************************ 00:21:33.517 00:21:33.517 real 0m15.138s 00:21:33.517 user 0m29.645s 00:21:33.517 sys 0m4.269s 00:21:33.517 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.517 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:33.776 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:33.776 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:33.777 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.777 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:33.777 ************************************ 00:21:33.777 START TEST nvmf_digest_error 00:21:33.777 ************************************ 00:21:33.777 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:21:33.777 14:37:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:33.777 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:33.777 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:33.777 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:33.777 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=96545 00:21:33.777 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 96545 00:21:33.777 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96545 ']' 00:21:33.777 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:33.777 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.777 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.777 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.777 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.777 14:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:33.777 [2024-12-16 14:37:25.808027] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:21:33.777 [2024-12-16 14:37:25.808134] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.777 [2024-12-16 14:37:25.955806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.777 [2024-12-16 14:37:25.973159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.777 [2024-12-16 14:37:25.973234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.777 [2024-12-16 14:37:25.973260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.777 [2024-12-16 14:37:25.973267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.777 [2024-12-16 14:37:25.973273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
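The error variant hinges on rerouting the crc32c opcode to the accel error module before the target framework initializes, which is why nvmf_tgt is launched with --wait-for-rpc above. A minimal sketch of that ordering, using the same binary and default RPC socket as this run (the ip netns exec nvmf_tgt_ns_spdk prefix and the script's common_target_config step are left out, and framework_start_init stands in for however the script actually finishes startup):

  # start the target but hold back framework init until told via RPC
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  # (the test's waitforlisten helper polls /var/tmp/spdk.sock before issuing RPCs)
  # route crc32c through the error-injection accel module, then complete startup
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init

Digest corruption is then toggled per pass with accel_error_inject_error -o crc32c -t corrupt -i 256 (and cleared with -t disable), which is what produces the stream of "data digest error" completions further down in the trace.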
00:21:33.777 [2024-12-16 14:37:25.973633] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.036 [2024-12-16 14:37:26.078060] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.036 [2024-12-16 14:37:26.113528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:34.036 null0 00:21:34.036 [2024-12-16 14:37:26.145098] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.036 [2024-12-16 14:37:26.169214] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96571 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96571 /var/tmp/bperf.sock 00:21:34.036 14:37:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96571 ']' 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:34.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.036 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.036 [2024-12-16 14:37:26.233630] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:21:34.036 [2024-12-16 14:37:26.233916] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96571 ] 00:21:34.295 [2024-12-16 14:37:26.379536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.295 [2024-12-16 14:37:26.397909] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.295 [2024-12-16 14:37:26.424984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:34.295 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.295 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:34.295 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:34.295 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:34.861 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:34.861 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.861 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.861 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.861 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:34.861 14:37:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:34.861 nvme0n1 00:21:35.120 14:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:35.120 14:37:27 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.120 14:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:35.120 14:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.120 14:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:35.120 14:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:35.120 Running I/O for 2 seconds... 00:21:35.120 [2024-12-16 14:37:27.197665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.120 [2024-12-16 14:37:27.197873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.120 [2024-12-16 14:37:27.197892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.120 [2024-12-16 14:37:27.212183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.120 [2024-12-16 14:37:27.212219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.120 [2024-12-16 14:37:27.212247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.120 [2024-12-16 14:37:27.226299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.120 [2024-12-16 14:37:27.226334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.120 [2024-12-16 14:37:27.226362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.120 [2024-12-16 14:37:27.240371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.120 [2024-12-16 14:37:27.240405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.120 [2024-12-16 14:37:27.240432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.120 [2024-12-16 14:37:27.255075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.120 [2024-12-16 14:37:27.255267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.120 [2024-12-16 14:37:27.255302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.120 [2024-12-16 14:37:27.269410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.120 [2024-12-16 14:37:27.269470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6066 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.120 [2024-12-16 14:37:27.269498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.120 [2024-12-16 14:37:27.283557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.120 [2024-12-16 14:37:27.283590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.120 [2024-12-16 14:37:27.283618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.120 [2024-12-16 14:37:27.297506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.120 [2024-12-16 14:37:27.297551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.120 [2024-12-16 14:37:27.297579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.120 [2024-12-16 14:37:27.311609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.120 [2024-12-16 14:37:27.311641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.120 [2024-12-16 14:37:27.311669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.379 [2024-12-16 14:37:27.327300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.379 [2024-12-16 14:37:27.327333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.379 [2024-12-16 14:37:27.327361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.379 [2024-12-16 14:37:27.341483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.379 [2024-12-16 14:37:27.341681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.379 [2024-12-16 14:37:27.341698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.379 [2024-12-16 14:37:27.355937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.379 [2024-12-16 14:37:27.355970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.379 [2024-12-16 14:37:27.355998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.379 [2024-12-16 14:37:27.369949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.379 [2024-12-16 14:37:27.369981] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.379 [2024-12-16 14:37:27.370009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.379 [2024-12-16 14:37:27.384053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.379 [2024-12-16 14:37:27.384085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.379 [2024-12-16 14:37:27.384114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.379 [2024-12-16 14:37:27.398150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.379 [2024-12-16 14:37:27.398183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.379 [2024-12-16 14:37:27.398210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.379 [2024-12-16 14:37:27.412487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.379 [2024-12-16 14:37:27.412521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.379 [2024-12-16 14:37:27.412549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.379 [2024-12-16 14:37:27.426504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.380 [2024-12-16 14:37:27.426535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.380 [2024-12-16 14:37:27.426563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.380 [2024-12-16 14:37:27.440622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.380 [2024-12-16 14:37:27.440823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.380 [2024-12-16 14:37:27.440855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.380 [2024-12-16 14:37:27.455034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.380 [2024-12-16 14:37:27.455249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.380 [2024-12-16 14:37:27.455281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.380 [2024-12-16 14:37:27.469361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.380 [2024-12-16 
14:37:27.469587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.380 [2024-12-16 14:37:27.469605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.380 [2024-12-16 14:37:27.483745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.380 [2024-12-16 14:37:27.483778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.380 [2024-12-16 14:37:27.483806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.380 [2024-12-16 14:37:27.497755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.380 [2024-12-16 14:37:27.497787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.380 [2024-12-16 14:37:27.497815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.380 [2024-12-16 14:37:27.512152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.380 [2024-12-16 14:37:27.512185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.380 [2024-12-16 14:37:27.512213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.380 [2024-12-16 14:37:27.526340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.380 [2024-12-16 14:37:27.526376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.380 [2024-12-16 14:37:27.526403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.380 [2024-12-16 14:37:27.540423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.380 [2024-12-16 14:37:27.540479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.380 [2024-12-16 14:37:27.540507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.380 [2024-12-16 14:37:27.554409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.380 [2024-12-16 14:37:27.554466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.380 [2024-12-16 14:37:27.554494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.380 [2024-12-16 14:37:27.568456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x15b5ba0) 00:21:35.380 [2024-12-16 14:37:27.568488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.380 [2024-12-16 14:37:27.568516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.639 [2024-12-16 14:37:27.583875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.583907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.583934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.639 [2024-12-16 14:37:27.598049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.598081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.598108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.639 [2024-12-16 14:37:27.612275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.612309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.612337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.639 [2024-12-16 14:37:27.626509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.626542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.626569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.639 [2024-12-16 14:37:27.640670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.640888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.640920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.639 [2024-12-16 14:37:27.655436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.655638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.655657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.639 [2024-12-16 14:37:27.669786] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.669819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.669847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.639 [2024-12-16 14:37:27.684149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.684182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.684211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.639 [2024-12-16 14:37:27.698354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.698387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.698415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.639 [2024-12-16 14:37:27.712665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.712711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.712740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.639 [2024-12-16 14:37:27.727064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.727303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.727320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.639 [2024-12-16 14:37:27.741500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.741533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.741561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.639 [2024-12-16 14:37:27.756218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.756283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.756302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:35.639 [2024-12-16 14:37:27.770575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.770612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.770640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.639 [2024-12-16 14:37:27.784672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.784706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.784734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.639 [2024-12-16 14:37:27.798588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.798768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.798801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.639 [2024-12-16 14:37:27.813045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.813241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.813259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.639 [2024-12-16 14:37:27.827719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.639 [2024-12-16 14:37:27.827751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.639 [2024-12-16 14:37:27.827780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.898 [2024-12-16 14:37:27.843535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.898 [2024-12-16 14:37:27.843570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.898 [2024-12-16 14:37:27.843599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.898 [2024-12-16 14:37:27.860231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.898 [2024-12-16 14:37:27.860266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.898 [2024-12-16 14:37:27.860295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.898 [2024-12-16 14:37:27.876938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.898 [2024-12-16 14:37:27.876970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.898 [2024-12-16 14:37:27.876998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.898 [2024-12-16 14:37:27.892499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.898 [2024-12-16 14:37:27.892560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.898 [2024-12-16 14:37:27.892588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.898 [2024-12-16 14:37:27.906990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.898 [2024-12-16 14:37:27.907025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.898 [2024-12-16 14:37:27.907054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.898 [2024-12-16 14:37:27.921102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.898 [2024-12-16 14:37:27.921134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.898 [2024-12-16 14:37:27.921162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.898 [2024-12-16 14:37:27.935417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.898 [2024-12-16 14:37:27.935476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.898 [2024-12-16 14:37:27.935505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.898 [2024-12-16 14:37:27.949341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.898 [2024-12-16 14:37:27.949374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.898 [2024-12-16 14:37:27.949401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.898 [2024-12-16 14:37:27.963594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.898 [2024-12-16 14:37:27.963626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.898 [2024-12-16 14:37:27.963654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.898 [2024-12-16 14:37:27.977713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.898 [2024-12-16 14:37:27.977745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.898 [2024-12-16 14:37:27.977772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.898 [2024-12-16 14:37:27.991850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.898 [2024-12-16 14:37:27.991882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.898 [2024-12-16 14:37:27.991910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.898 [2024-12-16 14:37:28.005992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.899 [2024-12-16 14:37:28.006023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.899 [2024-12-16 14:37:28.006051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.899 [2024-12-16 14:37:28.020161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.899 [2024-12-16 14:37:28.020194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.899 [2024-12-16 14:37:28.020221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.899 [2024-12-16 14:37:28.034320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.899 [2024-12-16 14:37:28.034351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.899 [2024-12-16 14:37:28.034379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.899 [2024-12-16 14:37:28.048446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.899 [2024-12-16 14:37:28.048478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.899 [2024-12-16 14:37:28.048506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.899 [2024-12-16 14:37:28.062466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.899 [2024-12-16 14:37:28.062499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:35.899 [2024-12-16 14:37:28.062526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.899 [2024-12-16 14:37:28.076527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.899 [2024-12-16 14:37:28.076558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.899 [2024-12-16 14:37:28.076586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.899 [2024-12-16 14:37:28.090406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:35.899 [2024-12-16 14:37:28.090479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.899 [2024-12-16 14:37:28.090492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.158 [2024-12-16 14:37:28.112562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.158 [2024-12-16 14:37:28.112597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-12-16 14:37:28.112625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.158 [2024-12-16 14:37:28.129198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.158 [2024-12-16 14:37:28.129233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-12-16 14:37:28.129262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.158 [2024-12-16 14:37:28.145802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.158 [2024-12-16 14:37:28.145852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-12-16 14:37:28.145880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.158 [2024-12-16 14:37:28.160812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.158 [2024-12-16 14:37:28.160877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-12-16 14:37:28.160905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.158 [2024-12-16 14:37:28.177151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.158 [2024-12-16 14:37:28.177200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 
nsid:1 lba:18009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-12-16 14:37:28.177228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.158 17332.00 IOPS, 67.70 MiB/s [2024-12-16T14:37:28.358Z] [2024-12-16 14:37:28.192373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.158 [2024-12-16 14:37:28.192422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-12-16 14:37:28.192475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.158 [2024-12-16 14:37:28.207719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.158 [2024-12-16 14:37:28.207768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-12-16 14:37:28.207795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.158 [2024-12-16 14:37:28.222719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.158 [2024-12-16 14:37:28.222771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-12-16 14:37:28.222800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.158 [2024-12-16 14:37:28.238095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.158 [2024-12-16 14:37:28.238144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-12-16 14:37:28.238171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.158 [2024-12-16 14:37:28.253257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.158 [2024-12-16 14:37:28.253306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-12-16 14:37:28.253333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.158 [2024-12-16 14:37:28.268709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.158 [2024-12-16 14:37:28.268760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-12-16 14:37:28.268787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.158 [2024-12-16 14:37:28.284485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 
00:21:36.158 [2024-12-16 14:37:28.284551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-12-16 14:37:28.284587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.158 [2024-12-16 14:37:28.299302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.158 [2024-12-16 14:37:28.299354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-12-16 14:37:28.299383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.158 [2024-12-16 14:37:28.313391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.158 [2024-12-16 14:37:28.313467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-12-16 14:37:28.313482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.158 [2024-12-16 14:37:28.327756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.158 [2024-12-16 14:37:28.327805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-12-16 14:37:28.327832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.158 [2024-12-16 14:37:28.341748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.158 [2024-12-16 14:37:28.341796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.158 [2024-12-16 14:37:28.341823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.417 [2024-12-16 14:37:28.356541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.417 [2024-12-16 14:37:28.356591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.417 [2024-12-16 14:37:28.356620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.417 [2024-12-16 14:37:28.371231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.417 [2024-12-16 14:37:28.371296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.417 [2024-12-16 14:37:28.371338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.417 [2024-12-16 14:37:28.385610] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.417 [2024-12-16 14:37:28.385658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.417 [2024-12-16 14:37:28.385686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.417 [2024-12-16 14:37:28.399824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.417 [2024-12-16 14:37:28.399871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.417 [2024-12-16 14:37:28.399899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.417 [2024-12-16 14:37:28.414019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.417 [2024-12-16 14:37:28.414068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.417 [2024-12-16 14:37:28.414095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.417 [2024-12-16 14:37:28.428179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.417 [2024-12-16 14:37:28.428226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.417 [2024-12-16 14:37:28.428253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.418 [2024-12-16 14:37:28.442265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.418 [2024-12-16 14:37:28.442315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.418 [2024-12-16 14:37:28.442342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.418 [2024-12-16 14:37:28.456410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.418 [2024-12-16 14:37:28.456484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.418 [2024-12-16 14:37:28.456513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.418 [2024-12-16 14:37:28.470392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.418 [2024-12-16 14:37:28.470463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.418 [2024-12-16 14:37:28.470476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:36.418 [2024-12-16 14:37:28.484546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.418 [2024-12-16 14:37:28.484594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.418 [2024-12-16 14:37:28.484622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.418 [2024-12-16 14:37:28.498548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.418 [2024-12-16 14:37:28.498594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.418 [2024-12-16 14:37:28.498621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.418 [2024-12-16 14:37:28.512613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.418 [2024-12-16 14:37:28.512660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.418 [2024-12-16 14:37:28.512688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.418 [2024-12-16 14:37:28.526764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.418 [2024-12-16 14:37:28.526810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.418 [2024-12-16 14:37:28.526837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.418 [2024-12-16 14:37:28.540782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.418 [2024-12-16 14:37:28.540829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.418 [2024-12-16 14:37:28.540857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.418 [2024-12-16 14:37:28.554878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.418 [2024-12-16 14:37:28.554942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.418 [2024-12-16 14:37:28.554970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.418 [2024-12-16 14:37:28.568846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.418 [2024-12-16 14:37:28.568893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.418 [2024-12-16 14:37:28.568919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.418 [2024-12-16 14:37:28.582935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.418 [2024-12-16 14:37:28.582984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.418 [2024-12-16 14:37:28.583012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.418 [2024-12-16 14:37:28.596908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.418 [2024-12-16 14:37:28.596955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.418 [2024-12-16 14:37:28.596982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.418 [2024-12-16 14:37:28.610934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.418 [2024-12-16 14:37:28.610984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.418 [2024-12-16 14:37:28.611011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.677 [2024-12-16 14:37:28.626443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.677 [2024-12-16 14:37:28.626498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.677 [2024-12-16 14:37:28.626525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.677 [2024-12-16 14:37:28.640567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.677 [2024-12-16 14:37:28.640614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.677 [2024-12-16 14:37:28.640641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.677 [2024-12-16 14:37:28.654895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.677 [2024-12-16 14:37:28.654946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.677 [2024-12-16 14:37:28.654974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.677 [2024-12-16 14:37:28.668944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.677 [2024-12-16 14:37:28.668992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.677 [2024-12-16 14:37:28.669019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.677 [2024-12-16 14:37:28.683049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.677 [2024-12-16 14:37:28.683099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.677 [2024-12-16 14:37:28.683127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.677 [2024-12-16 14:37:28.697053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.677 [2024-12-16 14:37:28.697100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.677 [2024-12-16 14:37:28.697128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.677 [2024-12-16 14:37:28.711161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.677 [2024-12-16 14:37:28.711226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.677 [2024-12-16 14:37:28.711253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.677 [2024-12-16 14:37:28.725357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.677 [2024-12-16 14:37:28.725404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.677 [2024-12-16 14:37:28.725431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.677 [2024-12-16 14:37:28.739411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.677 [2024-12-16 14:37:28.739467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.677 [2024-12-16 14:37:28.739495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.678 [2024-12-16 14:37:28.753484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.678 [2024-12-16 14:37:28.753541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.678 [2024-12-16 14:37:28.753570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.678 [2024-12-16 14:37:28.767549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.678 [2024-12-16 14:37:28.767596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:36.678 [2024-12-16 14:37:28.767622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.678 [2024-12-16 14:37:28.784282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.678 [2024-12-16 14:37:28.784334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.678 [2024-12-16 14:37:28.784347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.678 [2024-12-16 14:37:28.803690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.678 [2024-12-16 14:37:28.803725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.678 [2024-12-16 14:37:28.803737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.678 [2024-12-16 14:37:28.818613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.678 [2024-12-16 14:37:28.818675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.678 [2024-12-16 14:37:28.818708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.678 [2024-12-16 14:37:28.833333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.678 [2024-12-16 14:37:28.833384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.678 [2024-12-16 14:37:28.833413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.678 [2024-12-16 14:37:28.847416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.678 [2024-12-16 14:37:28.847473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.678 [2024-12-16 14:37:28.847501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.678 [2024-12-16 14:37:28.861562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.678 [2024-12-16 14:37:28.861610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.678 [2024-12-16 14:37:28.861639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.937 [2024-12-16 14:37:28.878503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.937 [2024-12-16 14:37:28.878561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:10478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.937 [2024-12-16 14:37:28.878591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.937 [2024-12-16 14:37:28.895694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.937 [2024-12-16 14:37:28.895732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.937 [2024-12-16 14:37:28.895761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.937 [2024-12-16 14:37:28.911899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.937 [2024-12-16 14:37:28.911947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.937 [2024-12-16 14:37:28.911974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.937 [2024-12-16 14:37:28.926787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.937 [2024-12-16 14:37:28.926835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.937 [2024-12-16 14:37:28.926846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.937 [2024-12-16 14:37:28.940772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.937 [2024-12-16 14:37:28.940818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.937 [2024-12-16 14:37:28.940845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.937 [2024-12-16 14:37:28.954762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.937 [2024-12-16 14:37:28.954810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.937 [2024-12-16 14:37:28.954838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.937 [2024-12-16 14:37:28.968853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.937 [2024-12-16 14:37:28.968900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.937 [2024-12-16 14:37:28.968928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.937 [2024-12-16 14:37:28.983193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.937 [2024-12-16 14:37:28.983255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.937 [2024-12-16 14:37:28.983267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.937 [2024-12-16 14:37:28.997326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.937 [2024-12-16 14:37:28.997374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.937 [2024-12-16 14:37:28.997402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.938 [2024-12-16 14:37:29.011539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.938 [2024-12-16 14:37:29.011585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.938 [2024-12-16 14:37:29.011612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.938 [2024-12-16 14:37:29.025523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.938 [2024-12-16 14:37:29.025571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.938 [2024-12-16 14:37:29.025598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.938 [2024-12-16 14:37:29.039615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.938 [2024-12-16 14:37:29.039662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.938 [2024-12-16 14:37:29.039689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.938 [2024-12-16 14:37:29.059655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.938 [2024-12-16 14:37:29.059703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.938 [2024-12-16 14:37:29.059730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.938 [2024-12-16 14:37:29.073756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.938 [2024-12-16 14:37:29.073802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.938 [2024-12-16 14:37:29.073829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.938 [2024-12-16 14:37:29.087862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.938 
[2024-12-16 14:37:29.087910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.938 [2024-12-16 14:37:29.087937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.938 [2024-12-16 14:37:29.101990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.938 [2024-12-16 14:37:29.102037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.938 [2024-12-16 14:37:29.102064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.938 [2024-12-16 14:37:29.116254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.938 [2024-12-16 14:37:29.116301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.938 [2024-12-16 14:37:29.116328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.938 [2024-12-16 14:37:29.130423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:36.938 [2024-12-16 14:37:29.130494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.938 [2024-12-16 14:37:29.130522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.202 [2024-12-16 14:37:29.145696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:37.202 [2024-12-16 14:37:29.145743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.202 [2024-12-16 14:37:29.145770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.203 [2024-12-16 14:37:29.159920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:37.203 [2024-12-16 14:37:29.159966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.203 [2024-12-16 14:37:29.159994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.203 [2024-12-16 14:37:29.173936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b5ba0) 00:21:37.203 [2024-12-16 14:37:29.173983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.203 [2024-12-16 14:37:29.174011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.203 17394.50 IOPS, 67.95 MiB/s 00:21:37.203 Latency(us) 00:21:37.203 [2024-12-16T14:37:29.403Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.203 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:37.203 nvme0n1 : 2.01 17398.57 67.96 0.00 0.00 7352.06 6613.18 30742.34 00:21:37.203 [2024-12-16T14:37:29.403Z] =================================================================================================================== 00:21:37.203 [2024-12-16T14:37:29.403Z] Total : 17398.57 67.96 0.00 0.00 7352.06 6613.18 30742.34 00:21:37.203 { 00:21:37.203 "results": [ 00:21:37.203 { 00:21:37.203 "job": "nvme0n1", 00:21:37.203 "core_mask": "0x2", 00:21:37.203 "workload": "randread", 00:21:37.203 "status": "finished", 00:21:37.203 "queue_depth": 128, 00:21:37.203 "io_size": 4096, 00:21:37.203 "runtime": 2.006889, 00:21:37.203 "iops": 17398.570623487398, 00:21:37.203 "mibps": 67.96316649799765, 00:21:37.203 "io_failed": 0, 00:21:37.203 "io_timeout": 0, 00:21:37.203 "avg_latency_us": 7352.056752975238, 00:21:37.203 "min_latency_us": 6613.178181818182, 00:21:37.203 "max_latency_us": 30742.34181818182 00:21:37.203 } 00:21:37.203 ], 00:21:37.203 "core_count": 1 00:21:37.203 } 00:21:37.203 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:37.203 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:37.203 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:37.203 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:37.203 | .driver_specific 00:21:37.203 | .nvme_error 00:21:37.203 | .status_code 00:21:37.203 | .command_transient_transport_error' 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 )) 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96571 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96571 ']' 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96571 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96571 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:37.506 killing process with pid 96571 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96571' 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96571 00:21:37.506 Received shutdown signal, test time was about 2.000000 seconds 00:21:37.506 00:21:37.506 Latency(us) 00:21:37.506 [2024-12-16T14:37:29.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.506 [2024-12-16T14:37:29.706Z] 
=================================================================================================================== 00:21:37.506 [2024-12-16T14:37:29.706Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96571 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96618 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96618 /var/tmp/bperf.sock 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96618 ']' 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.506 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:37.506 [2024-12-16 14:37:29.673979] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:21:37.507 [2024-12-16 14:37:29.674106] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96618 ] 00:21:37.507 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:37.507 Zero copy mechanism will not be used. 
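Earlier in the trace, the get_transient_errcount step reads the transient-transport-error counter for nvme0n1 back through the bperf RPC socket and asserts it is non-zero ((( 136 > 0 ))) before killing the bdevperf process. A minimal stand-alone sketch of that check, assuming the same rpc.py path, /var/tmp/bperf.sock socket, and jq filter that appear in the trace (the counter is only populated because bdev_nvme_set_options was called with --nvme-error-stat):

    # Sketch only: count READ completions that ended in COMMAND TRANSIENT TRANSPORT ERROR.
    # The jq filter is the one used by get_transient_errcount in host/digest.sh.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 )) && echo "data digest errors surfaced as transient transport errors: $errcount"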
00:21:37.765 [2024-12-16 14:37:29.815710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.765 [2024-12-16 14:37:29.834144] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.765 [2024-12-16 14:37:29.861567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:37.765 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.765 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:37.765 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:37.765 14:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:38.024 14:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:38.024 14:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.024 14:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:38.024 14:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.024 14:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:38.024 14:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:38.590 nvme0n1 00:21:38.590 14:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:38.590 14:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.590 14:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:38.590 14:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.590 14:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:38.590 14:37:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:38.590 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:38.590 Zero copy mechanism will not be used. 00:21:38.590 Running I/O for 2 seconds... 
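For context, the RPC sequence traced above is what arms this run: the bdevperf side is told to retry indefinitely while keeping per-status-code NVMe error counters, crc32c error injection is toggled through the accel error framework, and the controller is attached over TCP with --ddgst so every read payload carries a data digest to validate. A hedged sketch of the same sequence; the address, NQN, and socket paths are copied from the trace, while the SPDK_RPC/BPERF_RPC shorthands are illustrative only (in the trace, the accel_error_inject_error calls go through rpc_cmd on the default RPC socket rather than the bperf socket):

    # Illustrative shorthands; not part of the original scripts.
    SPDK_RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_RPC="$SPDK_RPC -s /var/tmp/bperf.sock"

    # bdevperf side: infinite retries plus per-status-code NVMe error accounting
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # crc32c injection starts out disabled (issued over the default RPC socket in the trace)
    $SPDK_RPC accel_error_inject_error -o crc32c -t disable
    # attach the TCP controller with data digest enabled
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt the next 32 crc32c computations so digest checks fail on the READ path
    $SPDK_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    # drive the timed workload through bdevperf's RPC helper
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests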
00:21:38.590 [2024-12-16 14:37:30.670839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.590 [2024-12-16 14:37:30.670943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.590 [2024-12-16 14:37:30.670960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.590 [2024-12-16 14:37:30.674970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.590 [2024-12-16 14:37:30.675011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.590 [2024-12-16 14:37:30.675025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.590 [2024-12-16 14:37:30.679119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.590 [2024-12-16 14:37:30.679161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.590 [2024-12-16 14:37:30.679189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.590 [2024-12-16 14:37:30.683312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.590 [2024-12-16 14:37:30.683362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.590 [2024-12-16 14:37:30.683385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.590 [2024-12-16 14:37:30.687344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.590 [2024-12-16 14:37:30.687393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.590 [2024-12-16 14:37:30.687421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.590 [2024-12-16 14:37:30.691512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.590 [2024-12-16 14:37:30.691570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.590 [2024-12-16 14:37:30.691600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.590 [2024-12-16 14:37:30.695517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.590 [2024-12-16 14:37:30.695561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.590 [2024-12-16 14:37:30.695590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.590 [2024-12-16 14:37:30.699428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.590 [2024-12-16 14:37:30.699502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.590 [2024-12-16 14:37:30.699530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.590 [2024-12-16 14:37:30.703452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.590 [2024-12-16 14:37:30.703512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.590 [2024-12-16 14:37:30.703540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.590 [2024-12-16 14:37:30.707296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.590 [2024-12-16 14:37:30.707343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.590 [2024-12-16 14:37:30.707370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.590 [2024-12-16 14:37:30.711133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.590 [2024-12-16 14:37:30.711200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.590 [2024-12-16 14:37:30.711246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.590 [2024-12-16 14:37:30.715062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.590 [2024-12-16 14:37:30.715114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.590 [2024-12-16 14:37:30.715127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.590 [2024-12-16 14:37:30.718804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.590 [2024-12-16 14:37:30.718862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.590 [2024-12-16 14:37:30.718909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.590 [2024-12-16 14:37:30.722644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.590 [2024-12-16 14:37:30.722694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.591 [2024-12-16 14:37:30.722706] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.591 [2024-12-16 14:37:30.726458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.591 [2024-12-16 14:37:30.726504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.591 [2024-12-16 14:37:30.726515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.591 [2024-12-16 14:37:30.730398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.591 [2024-12-16 14:37:30.730471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.591 [2024-12-16 14:37:30.730484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.591 [2024-12-16 14:37:30.734268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.591 [2024-12-16 14:37:30.734317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.591 [2024-12-16 14:37:30.734345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.591 [2024-12-16 14:37:30.738065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.591 [2024-12-16 14:37:30.738113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.591 [2024-12-16 14:37:30.738140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.591 [2024-12-16 14:37:30.741909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.591 [2024-12-16 14:37:30.741957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.591 [2024-12-16 14:37:30.741983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.591 [2024-12-16 14:37:30.745704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.591 [2024-12-16 14:37:30.745751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.591 [2024-12-16 14:37:30.745778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.591 [2024-12-16 14:37:30.749540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.591 [2024-12-16 14:37:30.749587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.591 [2024-12-16 14:37:30.749614] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.591 [2024-12-16 14:37:30.753398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.591 [2024-12-16 14:37:30.753470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.591 [2024-12-16 14:37:30.753483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.591 [2024-12-16 14:37:30.757185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.591 [2024-12-16 14:37:30.757233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.591 [2024-12-16 14:37:30.757260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.591 [2024-12-16 14:37:30.761067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.591 [2024-12-16 14:37:30.761115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.591 [2024-12-16 14:37:30.761142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.591 [2024-12-16 14:37:30.764941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.591 [2024-12-16 14:37:30.764988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.591 [2024-12-16 14:37:30.765015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.591 [2024-12-16 14:37:30.768757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.591 [2024-12-16 14:37:30.768803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.591 [2024-12-16 14:37:30.768830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.591 [2024-12-16 14:37:30.772516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.591 [2024-12-16 14:37:30.772562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.591 [2024-12-16 14:37:30.772589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.591 [2024-12-16 14:37:30.776294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.591 [2024-12-16 14:37:30.776341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:38.591 [2024-12-16 14:37:30.776368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.591 [2024-12-16 14:37:30.780141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.591 [2024-12-16 14:37:30.780188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.591 [2024-12-16 14:37:30.780215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.591 [2024-12-16 14:37:30.784043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.591 [2024-12-16 14:37:30.784093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.591 [2024-12-16 14:37:30.784122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.851 [2024-12-16 14:37:30.788420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.851 [2024-12-16 14:37:30.788495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.851 [2024-12-16 14:37:30.788523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.851 [2024-12-16 14:37:30.792856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.851 [2024-12-16 14:37:30.792921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.851 [2024-12-16 14:37:30.792949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.851 [2024-12-16 14:37:30.796723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.851 [2024-12-16 14:37:30.796770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.851 [2024-12-16 14:37:30.796797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.851 [2024-12-16 14:37:30.800699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.851 [2024-12-16 14:37:30.800746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.851 [2024-12-16 14:37:30.800773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.851 [2024-12-16 14:37:30.804526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.851 [2024-12-16 14:37:30.804572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.851 [2024-12-16 14:37:30.804600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.851 [2024-12-16 14:37:30.808343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.851 [2024-12-16 14:37:30.808390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.851 [2024-12-16 14:37:30.808417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.851 [2024-12-16 14:37:30.812189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.851 [2024-12-16 14:37:30.812236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.851 [2024-12-16 14:37:30.812263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.851 [2024-12-16 14:37:30.816080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.851 [2024-12-16 14:37:30.816127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.816154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.819963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.820011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.820038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.823847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.823895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.823922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.827797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.827859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.827886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.831550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.831597] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.831610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.835251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.835314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.835341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.839058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.839107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.839119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.842777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.842824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.842850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.846530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.846576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.846603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.850303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.850350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.850378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.854084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.854131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.854159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.857866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 
00:21:38.852 [2024-12-16 14:37:30.857913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.857939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.861676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.861722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.861749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.865501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.865548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.865575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.869294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.869341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.869367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.873249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.873296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.873322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.876977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.877024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.877052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.880811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.880857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.880884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.884680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.884728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.884755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.888462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.888508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.888535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.892275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.892323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.892350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.896079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.896126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.896153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.899915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.899961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.899988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.903711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.903758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.903785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.907582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.907628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.907656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.911833] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.911880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.911907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.916155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.916204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.852 [2024-12-16 14:37:30.916232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.852 [2024-12-16 14:37:30.920529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.852 [2024-12-16 14:37:30.920580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.920608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.925036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.925086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.925115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.929942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.929993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.930022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.934240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.934289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.934316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.938558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.938595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.938624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:21:38.853 [2024-12-16 14:37:30.942702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.942738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.942767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.946863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.946922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.946935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.950993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.951030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.951059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.954842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.954909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.954937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.958525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.958556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.958584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.962163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.962210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.962237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.965979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.966025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.966052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.969707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.969754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.969781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.973745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.973792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.973818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.977601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.977647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.977674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.981526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.981571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.981598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.985446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.985493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.985520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.989253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.989300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.989326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.993127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.993174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.993200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:30.997039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:30.997085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:30.997112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:31.000758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:31.000804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:31.000831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:31.004610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:31.004656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:31.004683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:31.008471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:31.008503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:31.008530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:31.012243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.853 [2024-12-16 14:37:31.012290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.853 [2024-12-16 14:37:31.012317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.853 [2024-12-16 14:37:31.016119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.854 [2024-12-16 14:37:31.016166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.854 [2024-12-16 14:37:31.016192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.854 [2024-12-16 14:37:31.019948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.854 [2024-12-16 14:37:31.019994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.854 
[2024-12-16 14:37:31.020021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.854 [2024-12-16 14:37:31.023696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.854 [2024-12-16 14:37:31.023743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.854 [2024-12-16 14:37:31.023770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.854 [2024-12-16 14:37:31.027565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.854 [2024-12-16 14:37:31.027612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.854 [2024-12-16 14:37:31.027640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.854 [2024-12-16 14:37:31.031332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.854 [2024-12-16 14:37:31.031379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.854 [2024-12-16 14:37:31.031406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:38.854 [2024-12-16 14:37:31.035159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.854 [2024-12-16 14:37:31.035196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.854 [2024-12-16 14:37:31.035239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.854 [2024-12-16 14:37:31.039166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.854 [2024-12-16 14:37:31.039233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.854 [2024-12-16 14:37:31.039261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.854 [2024-12-16 14:37:31.043166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.854 [2024-12-16 14:37:31.043202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.854 [2024-12-16 14:37:31.043215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.854 [2024-12-16 14:37:31.047653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:38.854 [2024-12-16 14:37:31.047701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.854 [2024-12-16 14:37:31.047728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.114 [2024-12-16 14:37:31.051809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.114 [2024-12-16 14:37:31.051858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.114 [2024-12-16 14:37:31.051885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.114 [2024-12-16 14:37:31.056115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.114 [2024-12-16 14:37:31.056164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.114 [2024-12-16 14:37:31.056192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.114 [2024-12-16 14:37:31.060167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.060215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.060243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.064152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.064200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.064228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.068154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.068202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.068229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.072016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.072064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.072090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.075930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.075977] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.076004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.079807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.079854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.079882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.083603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.083648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.083675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.087414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.087469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.087497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.091237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.091300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.091327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.095054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.095087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.095115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.098768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.098800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.098827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.102405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.102459] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.102487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.106219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.106266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.106293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.110186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.110234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.110261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.114020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.114066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.114094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.117931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.117977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.118004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.121784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.121831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.121859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.125574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.125620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.125646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.129483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 
00:21:39.115 [2024-12-16 14:37:31.129529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.129556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.133278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.133326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.133353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.137194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.137242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.137269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.141008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.141055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.141081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.144904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.144952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.144979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.148669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.148715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.148742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.152468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.152525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.152552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.156338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.156385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.156413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.160097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.160144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.115 [2024-12-16 14:37:31.160171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.115 [2024-12-16 14:37:31.163877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.115 [2024-12-16 14:37:31.163924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.163951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.168369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.168461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.168481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.173069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.173130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.173162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.177216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.177268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.177297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.180999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.181047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.181075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.184785] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.184833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.184860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.188533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.188579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.188607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.192314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.192361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.192389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.196283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.196332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.196359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.200097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.200144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.200172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.203877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.203924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.203951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.207613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.207646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.207673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:21:39.116 [2024-12-16 14:37:31.211337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.211384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.211411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.215078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.215112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.215140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.218859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.218914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.218942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.222590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.222636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.222663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.226330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.226378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.226405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.230156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.230203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.230230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.234020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.234067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.234095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.237862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.237910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.237937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.241611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.241657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.241685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.245405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.245462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.245490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.249228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.249275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.249302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.253094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.253142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.253171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.256980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.257026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.257053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.260799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.260831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.260858] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.264513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.264542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.264569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.268233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.116 [2024-12-16 14:37:31.268280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.116 [2024-12-16 14:37:31.268307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.116 [2024-12-16 14:37:31.271982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.117 [2024-12-16 14:37:31.272028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.117 [2024-12-16 14:37:31.272054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.117 [2024-12-16 14:37:31.275845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.117 [2024-12-16 14:37:31.275891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.117 [2024-12-16 14:37:31.275919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.117 [2024-12-16 14:37:31.279712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.117 [2024-12-16 14:37:31.279743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.117 [2024-12-16 14:37:31.279771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.117 [2024-12-16 14:37:31.283407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.117 [2024-12-16 14:37:31.283463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.117 [2024-12-16 14:37:31.283491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.117 [2024-12-16 14:37:31.287189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.117 [2024-12-16 14:37:31.287265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.117 [2024-12-16 
14:37:31.287292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.117 [2024-12-16 14:37:31.291052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.117 [2024-12-16 14:37:31.291087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.117 [2024-12-16 14:37:31.291115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.117 [2024-12-16 14:37:31.294847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.117 [2024-12-16 14:37:31.294916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.117 [2024-12-16 14:37:31.294945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.117 [2024-12-16 14:37:31.298619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.117 [2024-12-16 14:37:31.298665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.117 [2024-12-16 14:37:31.298692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.117 [2024-12-16 14:37:31.302261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.117 [2024-12-16 14:37:31.302308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.117 [2024-12-16 14:37:31.302334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.117 [2024-12-16 14:37:31.306023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.117 [2024-12-16 14:37:31.306069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.117 [2024-12-16 14:37:31.306096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.117 [2024-12-16 14:37:31.310384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.117 [2024-12-16 14:37:31.310469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.117 [2024-12-16 14:37:31.310483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.377 [2024-12-16 14:37:31.314578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.377 [2024-12-16 14:37:31.314624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:39.377 [2024-12-16 14:37:31.314652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.377 [2024-12-16 14:37:31.318580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.377 [2024-12-16 14:37:31.318642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.377 [2024-12-16 14:37:31.318669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.377 [2024-12-16 14:37:31.322460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.377 [2024-12-16 14:37:31.322506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.377 [2024-12-16 14:37:31.322533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.377 [2024-12-16 14:37:31.326255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.377 [2024-12-16 14:37:31.326302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.377 [2024-12-16 14:37:31.326330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.377 [2024-12-16 14:37:31.330066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.377 [2024-12-16 14:37:31.330113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.377 [2024-12-16 14:37:31.330140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.377 [2024-12-16 14:37:31.333992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.334039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.334066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.337849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.337896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.337923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.341560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.341593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.341620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.345247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.345293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.345320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.349133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.349180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.349207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.353021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.353067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.353093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.356907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.356953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.356980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.360698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.360745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.360772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.364460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.364506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.364532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.368175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.368222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.368250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.371981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.372028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.372055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.375837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.375884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.375911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.379640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.379672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.379699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.383497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.383553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.383580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.387203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.387251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.387292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.391040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.391074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.391086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.394665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 
[2024-12-16 14:37:31.394697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.394725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.398396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.398466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.398478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.402195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.402242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.402268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.406000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.406047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.406074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.409862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.409909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.409936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.413579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.413624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.413651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.417412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.417467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.417494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.421221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.421268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.421296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.425104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.425152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.425179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.429083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.429130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.429157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.432857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.432904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.432931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.378 [2024-12-16 14:37:31.436699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.378 [2024-12-16 14:37:31.436746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.378 [2024-12-16 14:37:31.436773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.440539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.440585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.440612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.444372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.444419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.444457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.448601] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.448648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.448675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.452408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.452467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.452495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.456335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.456383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.456410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.460249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.460296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.460323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.464156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.464203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.464230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.468004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.468051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.468078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.471800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.471833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.471859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.475637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.475669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.475696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.479381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.479453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.479465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.483210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.483254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.483295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.487038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.487072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.487099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.490760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.490807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.490834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.494465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.494511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.494539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.498222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.498270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.498297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.501956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.502003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.502030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.505804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.505851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.505878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.509590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.509653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.509680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.513482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.513528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.513555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.517301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.517348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.517375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.521229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.521276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.521303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.525117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.525164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.525191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.529010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.529057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.529084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.532772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.532805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.532833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.536557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.536590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.536618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.540239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.379 [2024-12-16 14:37:31.540286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.379 [2024-12-16 14:37:31.540314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.379 [2024-12-16 14:37:31.544100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.380 [2024-12-16 14:37:31.544147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.380 [2024-12-16 14:37:31.544174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.380 [2024-12-16 14:37:31.548055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.380 [2024-12-16 14:37:31.548103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.380 [2024-12-16 14:37:31.548130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.380 [2024-12-16 14:37:31.551874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.380 [2024-12-16 14:37:31.551921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.380 [2024-12-16 14:37:31.551948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.380 [2024-12-16 14:37:31.555623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.380 [2024-12-16 14:37:31.555669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.380 [2024-12-16 14:37:31.555696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.380 [2024-12-16 14:37:31.559362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.380 [2024-12-16 14:37:31.559408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.380 [2024-12-16 14:37:31.559435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.380 [2024-12-16 14:37:31.563104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.380 [2024-12-16 14:37:31.563138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.380 [2024-12-16 14:37:31.563165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.380 [2024-12-16 14:37:31.566752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.380 [2024-12-16 14:37:31.566798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.380 [2024-12-16 14:37:31.566825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.380 [2024-12-16 14:37:31.570543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.380 [2024-12-16 14:37:31.570592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.380 [2024-12-16 14:37:31.570621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.574751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.574814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.574841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.578617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.578663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9248 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.578690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.582749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.582796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.582823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.586506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.586552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.586579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.590273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.590320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.590347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.594109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.594155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.594182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.598054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.598101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.598128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.601870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.601917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.601944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.605599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.605631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.605658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.609314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.609362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.609389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.613116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.613163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.613191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.616999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.617033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.617060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.620727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.620775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.620802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.624551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.624597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.624624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.628631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.628680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.628692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.632527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 
[2024-12-16 14:37:31.632573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.632600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.636271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.636319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.636345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.640190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.640237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.640264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.644107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.644154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.644181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.648023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.648069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.648096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.651818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.651864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.651891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.655535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.655581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.655608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.659283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.659330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.659357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.640 [2024-12-16 14:37:31.663082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.640 [2024-12-16 14:37:31.663118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.640 [2024-12-16 14:37:31.663147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.666956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.666990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.667002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.641 7982.00 IOPS, 997.75 MiB/s [2024-12-16T14:37:31.841Z] [2024-12-16 14:37:31.672248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.672297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.672324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.676156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.676203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.676231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.680132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.680165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.680192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.684106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.684154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.684181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.641 
[2024-12-16 14:37:31.688059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.688106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.688133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.692250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.692328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.692346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.697285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.697349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.697382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.701961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.702012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.702041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.706003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.706052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.706079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.710347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.710396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.710424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.714691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.714741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.714785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.719160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.719199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.719232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.723623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.723674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.723703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.728019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.728069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.728096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.732206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.732254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.732282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.736263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.736312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.736340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.740501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.740548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.740576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.744438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.744510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.744538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.748467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.748511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.748539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.752387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.752459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.752472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.756426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.756482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.756510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.760315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.760363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.760390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.764240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.764288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.764314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.768182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.768230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.768258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.772168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.772216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.641 [2024-12-16 14:37:31.772244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.641 [2024-12-16 14:37:31.776186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.641 [2024-12-16 14:37:31.776234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.642 [2024-12-16 14:37:31.776262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.642 [2024-12-16 14:37:31.780088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.642 [2024-12-16 14:37:31.780121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.642 [2024-12-16 14:37:31.780149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.642 [2024-12-16 14:37:31.783996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.642 [2024-12-16 14:37:31.784047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.642 [2024-12-16 14:37:31.784076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.642 [2024-12-16 14:37:31.787906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.642 [2024-12-16 14:37:31.787955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.642 [2024-12-16 14:37:31.787982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.642 [2024-12-16 14:37:31.791732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.642 [2024-12-16 14:37:31.791781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.642 [2024-12-16 14:37:31.791808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.642 [2024-12-16 14:37:31.795687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.642 [2024-12-16 14:37:31.795734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.642 [2024-12-16 14:37:31.795761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.642 [2024-12-16 14:37:31.799781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.642 [2024-12-16 14:37:31.799814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.642 [2024-12-16 14:37:31.799842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.642 [2024-12-16 14:37:31.803811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.642 [2024-12-16 14:37:31.803844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.642 [2024-12-16 14:37:31.803872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.642 [2024-12-16 14:37:31.807635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.642 [2024-12-16 14:37:31.807683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.642 [2024-12-16 14:37:31.807710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.642 [2024-12-16 14:37:31.811437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.642 [2024-12-16 14:37:31.811496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.642 [2024-12-16 14:37:31.811523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.642 [2024-12-16 14:37:31.815529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.642 [2024-12-16 14:37:31.815576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.642 [2024-12-16 14:37:31.815603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.642 [2024-12-16 14:37:31.819500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.642 [2024-12-16 14:37:31.819558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.642 [2024-12-16 14:37:31.819586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.642 [2024-12-16 14:37:31.823280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.642 [2024-12-16 14:37:31.823328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.642 [2024-12-16 14:37:31.823355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.642 [2024-12-16 14:37:31.827188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.642 [2024-12-16 14:37:31.827269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.642 [2024-12-16 14:37:31.827297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.642 [2024-12-16 14:37:31.831341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.642 [2024-12-16 14:37:31.831388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.642 [2024-12-16 14:37:31.831416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.642 [2024-12-16 14:37:31.835484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.642 [2024-12-16 14:37:31.835543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.642 [2024-12-16 14:37:31.835572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.902 [2024-12-16 14:37:31.839695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.902 [2024-12-16 14:37:31.839743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.902 [2024-12-16 14:37:31.839770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.902 [2024-12-16 14:37:31.843759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.902 [2024-12-16 14:37:31.843823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.902 [2024-12-16 14:37:31.843851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.902 [2024-12-16 14:37:31.847844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.902 [2024-12-16 14:37:31.847892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.902 [2024-12-16 14:37:31.847920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.902 [2024-12-16 14:37:31.851631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.902 [2024-12-16 14:37:31.851678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.902 [2024-12-16 14:37:31.851706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.902 [2024-12-16 14:37:31.855523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.902 [2024-12-16 14:37:31.855571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.902 [2024-12-16 14:37:31.855598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.902 [2024-12-16 14:37:31.859636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.902 [2024-12-16 14:37:31.859668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.902 [2024-12-16 14:37:31.859695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.902 [2024-12-16 14:37:31.863543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.902 [2024-12-16 14:37:31.863590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.902 [2024-12-16 14:37:31.863617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.902 [2024-12-16 14:37:31.867421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.902 [2024-12-16 14:37:31.867478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.902 [2024-12-16 14:37:31.867506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.902 [2024-12-16 14:37:31.871398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.902 [2024-12-16 14:37:31.871469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.902 [2024-12-16 14:37:31.871481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.902 [2024-12-16 14:37:31.875495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.902 [2024-12-16 14:37:31.875566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.902 [2024-12-16 14:37:31.875594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.902 [2024-12-16 14:37:31.879460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.902 [2024-12-16 14:37:31.879516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.902 [2024-12-16 14:37:31.879545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.902 [2024-12-16 14:37:31.883324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.902 
[2024-12-16 14:37:31.883371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.902 [2024-12-16 14:37:31.883398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.902 [2024-12-16 14:37:31.887104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.902 [2024-12-16 14:37:31.887155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.902 [2024-12-16 14:37:31.887167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.890797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.890843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.890878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.894606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.894653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.894680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.898362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.898409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.898436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.902238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.902285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.902312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.906065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.906112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.906139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.909831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.909878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.909905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.913563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.913608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.913635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.917464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.917508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.917535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.921387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.921460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.921473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.925148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.925195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.925221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.929061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.929109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.929136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.933149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.933197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.933225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.937403] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.937479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.937508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.941706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.941742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.941770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.945836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.945884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.945912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.950338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.950388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.950415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.954832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.954865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.954916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.958939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.958975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.959003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.963062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.963099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.963129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:21:39.903 [2024-12-16 14:37:31.967320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.967363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.967390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.971433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.971519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.971548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.975568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.975618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.975647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.979978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.980026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.980054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.983879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.983926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.983953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.987646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.987694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.987721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.991422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.991494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.991523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.995376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.995424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.903 [2024-12-16 14:37:31.995477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.903 [2024-12-16 14:37:31.999158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.903 [2024-12-16 14:37:31.999207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:31.999235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.002857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.002929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.002942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.006648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.006695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.006722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.010626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.010658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.010685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.014262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.014309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.014336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.018069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.018115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.018143] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.021884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.021932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.021959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.025714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.025762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.025788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.029641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.029688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.029715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.033415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.033470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.033498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.037221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.037269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.037296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.041183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.041232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.041259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.045034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.045082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 
14:37:32.045109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.048869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.048916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.048943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.052741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.052789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.052816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.056562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.056609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.056636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.060429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.060486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.060514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.064214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.064261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.064288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.068047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.068095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.068122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.071886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.071933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.071960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.075669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.075716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.075744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.079509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.079567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.079596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.083406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.083479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.083507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.087173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.087252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.087279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.091045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.091080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.091109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.094797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.094844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.094879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.904 [2024-12-16 14:37:32.099074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:39.904 [2024-12-16 14:37:32.099125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.904 [2024-12-16 14:37:32.099153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.103094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.103146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.103174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.107422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.107494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.107522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.111279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.111341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.111368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.115067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.115109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.115136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.118791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.118838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.118864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.122694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.122742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.122769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.126410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.126467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.126494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.130289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.130336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.130364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.134136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.134183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.134211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.137932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.137979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.138007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.141741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.141789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.141816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.145570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.145616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.145643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.149361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.149407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.149435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.153111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 
[2024-12-16 14:37:32.153158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.153185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.156934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.156981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.157008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.160890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.160923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.160951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.164628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.164675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.164702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.168350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.168397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.168424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.172121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.172168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.172195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.176077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.176126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.176154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.179929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.179976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.180004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.165 [2024-12-16 14:37:32.183630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.165 [2024-12-16 14:37:32.183677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.165 [2024-12-16 14:37:32.183704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.187467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.187523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.187551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.191299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.191346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.191373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.195079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.195128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.195156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.198835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.198905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.198933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.202617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.202663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.202690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.206335] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.206381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.206408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.210112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.210159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.210186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.214003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.214049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.214076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.217825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.217857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.217884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.222200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.222277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.222295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.226845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.226933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.226969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.231381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.231456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.231471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.235278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.235327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.235355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.239034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.239069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.239097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.242782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.242829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.242856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.246525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.246572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.246599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.250251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.250300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.250327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.254039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.254087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.254113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.257768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.257814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.257841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.261670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.261718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.261758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.265472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.265519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.265546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.269232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.269279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.269306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.273119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.273166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.273193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.276972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.277020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.277047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.280793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.280857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.166 [2024-12-16 14:37:32.280884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.166 [2024-12-16 14:37:32.284705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.166 [2024-12-16 14:37:32.284752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.284779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.288479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.288526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.288563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.292283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.292331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.292358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.296137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.296185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.296212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.299951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.299998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.300025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.303708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.303756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.303783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.307509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.307565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.307593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.311276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.311308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:40.167 [2024-12-16 14:37:32.311335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.315064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.315100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.315128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.318897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.318960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.318988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.322716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.322763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.322790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.326450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.326496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.326524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.330245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.330293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.330319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.334041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.334088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.334115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.337830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.337877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5984 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.337904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.341666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.341713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.341739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.345512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.345557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.345584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.349286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.349333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.349360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.353106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.353153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.353180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.356946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.356992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.357019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.167 [2024-12-16 14:37:32.361333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.167 [2024-12-16 14:37:32.361366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.167 [2024-12-16 14:37:32.361394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.365454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.365486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.365513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.369624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.369670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.369697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.373488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.373535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.373562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.377290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.377337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.377364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.381190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.381237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.381264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.385099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.385145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.385172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.388932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.388979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.389006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.392800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 
[2024-12-16 14:37:32.392864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.392892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.396602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.396648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.396675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.400411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.400468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.400480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.404242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.404290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.404317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.408105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.408152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.408179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.412085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.412132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.412159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.415883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.415930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.415957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.419680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.419727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.419755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.423383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.423455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.423484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.427086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.427136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.427164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.430993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.431028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.431056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.434736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.434783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.434810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.438548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.438594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.438621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.442324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.442372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.442399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.446116] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.446163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.446190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.449974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.450021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.450048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.453751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.453798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.453824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.457543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.457590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.457617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.461365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.428 [2024-12-16 14:37:32.461412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.428 [2024-12-16 14:37:32.461439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.428 [2024-12-16 14:37:32.465105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.465152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.465179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.468919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.468966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.468993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:21:40.429 [2024-12-16 14:37:32.472801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.472848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.472875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.476704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.476752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.476779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.480525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.480572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.480599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.484287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.484334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.484361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.488133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.488180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.488208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.492123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.492171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.492198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.495958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.496004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.496032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.499703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.499751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.499778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.503412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.503482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.503510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.507183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.507248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.507275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.511008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.511057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.511084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.514744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.514790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.514817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.518518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.518564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.518591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.522402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.522472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.522501] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.526125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.526172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.526199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.529964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.530011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.530038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.533694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.533741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.533768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.537515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.537561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.537588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.541317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.541364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.541391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.545179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.545226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.545253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.549087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.549135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:40.429 [2024-12-16 14:37:32.549162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.552934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.552981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.553008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.556762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.556809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.556836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.560505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.560551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.560578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.564238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.564285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.564312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.429 [2024-12-16 14:37:32.568050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.429 [2024-12-16 14:37:32.568096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.429 [2024-12-16 14:37:32.568123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.430 [2024-12-16 14:37:32.571927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.430 [2024-12-16 14:37:32.571974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.430 [2024-12-16 14:37:32.572001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.430 [2024-12-16 14:37:32.575691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.430 [2024-12-16 14:37:32.575739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7712 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.430 [2024-12-16 14:37:32.575766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.430 [2024-12-16 14:37:32.579390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.430 [2024-12-16 14:37:32.579459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.430 [2024-12-16 14:37:32.579472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.430 [2024-12-16 14:37:32.583146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.430 [2024-12-16 14:37:32.583182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.430 [2024-12-16 14:37:32.583224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.430 [2024-12-16 14:37:32.586953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.430 [2024-12-16 14:37:32.587002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.430 [2024-12-16 14:37:32.587030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.430 [2024-12-16 14:37:32.590606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.430 [2024-12-16 14:37:32.590652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.430 [2024-12-16 14:37:32.590679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.430 [2024-12-16 14:37:32.594290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.430 [2024-12-16 14:37:32.594337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.430 [2024-12-16 14:37:32.594364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.430 [2024-12-16 14:37:32.598077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.430 [2024-12-16 14:37:32.598123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.430 [2024-12-16 14:37:32.598150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.430 [2024-12-16 14:37:32.601926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.430 [2024-12-16 14:37:32.601973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.430 [2024-12-16 14:37:32.602001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.430 [2024-12-16 14:37:32.605729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.430 [2024-12-16 14:37:32.605776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.430 [2024-12-16 14:37:32.605803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.430 [2024-12-16 14:37:32.609516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.430 [2024-12-16 14:37:32.609562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.430 [2024-12-16 14:37:32.609589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.430 [2024-12-16 14:37:32.613313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.430 [2024-12-16 14:37:32.613360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.430 [2024-12-16 14:37:32.613387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.430 [2024-12-16 14:37:32.617137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.430 [2024-12-16 14:37:32.617184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.430 [2024-12-16 14:37:32.617211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.430 [2024-12-16 14:37:32.621080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.430 [2024-12-16 14:37:32.621143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.430 [2024-12-16 14:37:32.621171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.689 [2024-12-16 14:37:32.625404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.689 [2024-12-16 14:37:32.625460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.689 [2024-12-16 14:37:32.625488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.689 [2024-12-16 14:37:32.629390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.689 
[2024-12-16 14:37:32.629460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.689 [2024-12-16 14:37:32.629473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.689 [2024-12-16 14:37:32.633572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.689 [2024-12-16 14:37:32.633619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.689 [2024-12-16 14:37:32.633646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.689 [2024-12-16 14:37:32.637460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.689 [2024-12-16 14:37:32.637506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.689 [2024-12-16 14:37:32.637534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.689 [2024-12-16 14:37:32.641220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.689 [2024-12-16 14:37:32.641267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.689 [2024-12-16 14:37:32.641294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.689 [2024-12-16 14:37:32.645130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.689 [2024-12-16 14:37:32.645177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.689 [2024-12-16 14:37:32.645204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.689 [2024-12-16 14:37:32.649057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.689 [2024-12-16 14:37:32.649104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.689 [2024-12-16 14:37:32.649131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.689 [2024-12-16 14:37:32.652962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0) 00:21:40.689 [2024-12-16 14:37:32.653009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.689 [2024-12-16 14:37:32.653037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.690 [2024-12-16 14:37:32.656723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x21fb9e0)
00:21:40.690 [2024-12-16 14:37:32.656770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:40.690 [2024-12-16 14:37:32.656797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:21:40.690 [2024-12-16 14:37:32.660590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0)
00:21:40.690 [2024-12-16 14:37:32.660637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:40.690 [2024-12-16 14:37:32.660664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:21:40.690 [2024-12-16 14:37:32.664404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0)
00:21:40.690 [2024-12-16 14:37:32.664461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:40.690 [2024-12-16 14:37:32.664490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:21:40.690 [2024-12-16 14:37:32.669501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21fb9e0)
00:21:40.690 [2024-12-16 14:37:32.669547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:40.690 [2024-12-16 14:37:32.669574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:21:40.690 7951.50 IOPS, 993.94 MiB/s
00:21:40.690 Latency(us)
00:21:40.690 [2024-12-16T14:37:32.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:40.690 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:21:40.690 nvme0n1 : 2.00 7952.70 994.09 0.00 0.00 2008.84 1675.64 5391.83
00:21:40.690 [2024-12-16T14:37:32.890Z] ===================================================================================================================
00:21:40.690 [2024-12-16T14:37:32.890Z] Total : 7952.70 994.09 0.00 0.00 2008.84 1675.64 5391.83
00:21:40.690 {
00:21:40.690 "results": [
00:21:40.690 {
00:21:40.690 "job": "nvme0n1",
00:21:40.690 "core_mask": "0x2",
00:21:40.690 "workload": "randread",
00:21:40.690 "status": "finished",
00:21:40.690 "queue_depth": 16,
00:21:40.690 "io_size": 131072,
00:21:40.690 "runtime": 2.003596,
00:21:40.690 "iops": 7952.701043523744,
00:21:40.690 "mibps": 994.087630440468,
00:21:40.690 "io_failed": 0,
00:21:40.690 "io_timeout": 0,
00:21:40.690 "avg_latency_us": 2008.8429279870375,
00:21:40.690 "min_latency_us": 1675.6363636363637,
00:21:40.690 "max_latency_us": 5391.825454545455
00:21:40.690 }
00:21:40.690 ],
00:21:40.690 "core_count": 1
00:21:40.690 }
00:21:40.690 14:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:40.690 14:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:40.690 14:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:40.690 14:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:40.690 | .driver_specific
00:21:40.690 | .nvme_error
00:21:40.690 | .status_code
00:21:40.690 | .command_transient_transport_error'
00:21:40.949 14:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 514 > 0 ))
00:21:40.949 14:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96618
00:21:40.949 14:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96618 ']'
00:21:40.949 14:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96618
00:21:40.949 14:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:21:40.949 14:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:40.949 14:37:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96618
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:40.949 killing process with pid 96618
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96618'
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96618
00:21:40.949 Received shutdown signal, test time was about 2.000000 seconds
00:21:40.949
00:21:40.949 Latency(us)
00:21:40.949 [2024-12-16T14:37:33.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:40.949 [2024-12-16T14:37:33.149Z] ===================================================================================================================
00:21:40.949 [2024-12-16T14:37:33.149Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96618
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96671
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96671 /var/tmp/bperf.sock
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96671 ']'
14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:40.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:40.949 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:41.208 [2024-12-16 14:37:33.165943] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:21:41.208 [2024-12-16 14:37:33.166045] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96671 ]
00:21:41.208 [2024-12-16 14:37:33.310292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:41.208 [2024-12-16 14:37:33.328691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:21:41.208 [2024-12-16 14:37:33.355561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:21:41.208 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:41.208 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:21:41.208 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:41.208 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:41.775 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:41.775 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.775 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:41.775 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.775 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:41.775 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:41.775 nvme0n1
00:21:41.775 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:21:41.775 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.775 14:37:33
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:41.775 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.775 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:42.034 14:37:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:42.034 Running I/O for 2 seconds... 00:21:42.034 [2024-12-16 14:37:34.115338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef7100 00:21:42.034 [2024-12-16 14:37:34.117027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.034 [2024-12-16 14:37:34.117082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:42.034 [2024-12-16 14:37:34.131808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef7970 00:21:42.034 [2024-12-16 14:37:34.133583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.034 [2024-12-16 14:37:34.133612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.035 [2024-12-16 14:37:34.146898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef81e0 00:21:42.035 [2024-12-16 14:37:34.148592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.035 [2024-12-16 14:37:34.148637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:42.035 [2024-12-16 14:37:34.161333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef8a50 00:21:42.035 [2024-12-16 14:37:34.162820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.035 [2024-12-16 14:37:34.163048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:42.035 [2024-12-16 14:37:34.175917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef92c0 00:21:42.035 [2024-12-16 14:37:34.177377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.035 [2024-12-16 14:37:34.177410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:42.035 [2024-12-16 14:37:34.190570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef9b30 00:21:42.035 [2024-12-16 14:37:34.192098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.035 [2024-12-16 14:37:34.192131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:42.035 [2024-12-16 14:37:34.204683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efa3a0 00:21:42.035 [2024-12-16 14:37:34.206067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.035 [2024-12-16 14:37:34.206098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:42.035 [2024-12-16 14:37:34.218803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efac10 00:21:42.035 [2024-12-16 14:37:34.220522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.035 [2024-12-16 14:37:34.220559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:42.294 [2024-12-16 14:37:34.233920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efb480 00:21:42.294 [2024-12-16 14:37:34.235783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.294 [2024-12-16 14:37:34.235817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:42.294 [2024-12-16 14:37:34.248828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efbcf0 00:21:42.294 [2024-12-16 14:37:34.250422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.294 [2024-12-16 14:37:34.250476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:42.294 [2024-12-16 14:37:34.263116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efc560 00:21:42.294 [2024-12-16 14:37:34.264770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.294 [2024-12-16 14:37:34.264802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:42.294 [2024-12-16 14:37:34.277582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efcdd0 00:21:42.294 [2024-12-16 14:37:34.278961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.294 [2024-12-16 14:37:34.279011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:42.294 [2024-12-16 14:37:34.291870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efd640 00:21:42.294 [2024-12-16 14:37:34.293170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.294 [2024-12-16 14:37:34.293216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:42.294 [2024-12-16 14:37:34.306191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efdeb0 00:21:42.294 [2024-12-16 14:37:34.307606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.294 [2024-12-16 14:37:34.307637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:42.294 [2024-12-16 14:37:34.320765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efe720 00:21:42.294 [2024-12-16 14:37:34.322114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.294 [2024-12-16 14:37:34.322145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:42.294 [2024-12-16 14:37:34.334394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eff3c8 00:21:42.294 [2024-12-16 14:37:34.335708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.294 [2024-12-16 14:37:34.335755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:42.294 [2024-12-16 14:37:34.353925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eff3c8 00:21:42.294 [2024-12-16 14:37:34.356277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.294 [2024-12-16 14:37:34.356324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.294 [2024-12-16 14:37:34.367503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efe720 00:21:42.294 [2024-12-16 14:37:34.369744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.294 [2024-12-16 14:37:34.369776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:42.294 [2024-12-16 14:37:34.380917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efdeb0 00:21:42.295 [2024-12-16 14:37:34.383135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.295 [2024-12-16 14:37:34.383196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:42.295 [2024-12-16 14:37:34.394232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efd640 00:21:42.295 [2024-12-16 14:37:34.396550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.295 [2024-12-16 14:37:34.396593] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:42.295 [2024-12-16 14:37:34.407606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efcdd0 00:21:42.295 [2024-12-16 14:37:34.409844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.295 [2024-12-16 14:37:34.409890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:42.295 [2024-12-16 14:37:34.421357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efc560 00:21:42.295 [2024-12-16 14:37:34.423625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.295 [2024-12-16 14:37:34.423671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:42.295 [2024-12-16 14:37:34.434796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efbcf0 00:21:42.295 [2024-12-16 14:37:34.437042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.295 [2024-12-16 14:37:34.437086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:42.295 [2024-12-16 14:37:34.448263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efb480 00:21:42.295 [2024-12-16 14:37:34.450411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.295 [2024-12-16 14:37:34.450461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:42.295 [2024-12-16 14:37:34.461667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efac10 00:21:42.295 [2024-12-16 14:37:34.463817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.295 [2024-12-16 14:37:34.463861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:42.295 [2024-12-16 14:37:34.475095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efa3a0 00:21:42.295 [2024-12-16 14:37:34.477210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.295 [2024-12-16 14:37:34.477256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:42.295 [2024-12-16 14:37:34.488672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef9b30 00:21:42.295 [2024-12-16 14:37:34.491077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.295 [2024-12-16 
14:37:34.491125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:42.554 [2024-12-16 14:37:34.503112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef92c0 00:21:42.554 [2024-12-16 14:37:34.505304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.554 [2024-12-16 14:37:34.505334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:42.554 [2024-12-16 14:37:34.516806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef8a50 00:21:42.554 [2024-12-16 14:37:34.518846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.554 [2024-12-16 14:37:34.518912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:42.554 [2024-12-16 14:37:34.530362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef81e0 00:21:42.554 [2024-12-16 14:37:34.532501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.554 [2024-12-16 14:37:34.532546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:42.554 [2024-12-16 14:37:34.544012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef7970 00:21:42.554 [2024-12-16 14:37:34.546023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.554 [2024-12-16 14:37:34.546067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:42.554 [2024-12-16 14:37:34.557324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef7100 00:21:42.554 [2024-12-16 14:37:34.559444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.554 [2024-12-16 14:37:34.559493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:42.554 [2024-12-16 14:37:34.570690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef6890 00:21:42.554 [2024-12-16 14:37:34.572713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.554 [2024-12-16 14:37:34.572757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.554 [2024-12-16 14:37:34.584147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef6020 00:21:42.554 [2024-12-16 14:37:34.586117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:42.554 [2024-12-16 14:37:34.586160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:42.554 [2024-12-16 14:37:34.597516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef57b0 00:21:42.554 [2024-12-16 14:37:34.599512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.554 [2024-12-16 14:37:34.599556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:42.554 [2024-12-16 14:37:34.610793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef4f40 00:21:42.554 [2024-12-16 14:37:34.612787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.554 [2024-12-16 14:37:34.612832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:42.554 [2024-12-16 14:37:34.624286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef46d0 00:21:42.554 [2024-12-16 14:37:34.626243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.554 [2024-12-16 14:37:34.626286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:42.554 [2024-12-16 14:37:34.637748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef3e60 00:21:42.554 [2024-12-16 14:37:34.639704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.554 [2024-12-16 14:37:34.639749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:42.555 [2024-12-16 14:37:34.651085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef35f0 00:21:42.555 [2024-12-16 14:37:34.653000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.555 [2024-12-16 14:37:34.653044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:42.555 [2024-12-16 14:37:34.664479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef2d80 00:21:42.555 [2024-12-16 14:37:34.666383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.555 [2024-12-16 14:37:34.666466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:42.555 [2024-12-16 14:37:34.677859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef2510 00:21:42.555 [2024-12-16 14:37:34.679825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25514 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:42.555 [2024-12-16 14:37:34.679869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:42.555 [2024-12-16 14:37:34.691395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef1ca0 00:21:42.555 [2024-12-16 14:37:34.693180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.555 [2024-12-16 14:37:34.693226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:42.555 [2024-12-16 14:37:34.704707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef1430 00:21:42.555 [2024-12-16 14:37:34.706513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.555 [2024-12-16 14:37:34.706558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:42.555 [2024-12-16 14:37:34.718146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef0bc0 00:21:42.555 [2024-12-16 14:37:34.720099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.555 [2024-12-16 14:37:34.720145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:42.555 [2024-12-16 14:37:34.731989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef0350 00:21:42.555 [2024-12-16 14:37:34.733763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.555 [2024-12-16 14:37:34.733808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:42.555 [2024-12-16 14:37:34.745311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eefae0 00:21:42.555 [2024-12-16 14:37:34.747187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.555 [2024-12-16 14:37:34.747233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.760143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eef270 00:21:42.814 [2024-12-16 14:37:34.761963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.761993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.773682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eeea00 00:21:42.814 [2024-12-16 14:37:34.775540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23609 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.775585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.787193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eee190 00:21:42.814 [2024-12-16 14:37:34.788996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.789041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.800644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eed920 00:21:42.814 [2024-12-16 14:37:34.802355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.802398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.813941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eed0b0 00:21:42.814 [2024-12-16 14:37:34.815730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.815761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.827378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eec840 00:21:42.814 [2024-12-16 14:37:34.829088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.829132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.840810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eebfd0 00:21:42.814 [2024-12-16 14:37:34.842461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.842513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.854071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eeb760 00:21:42.814 [2024-12-16 14:37:34.855815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.855846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.867497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eeaef0 00:21:42.814 [2024-12-16 14:37:34.869098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:94 nsid:1 lba:5513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.869142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.882476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eea680 00:21:42.814 [2024-12-16 14:37:34.884191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.884241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.896601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee9e10 00:21:42.814 [2024-12-16 14:37:34.898208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.898257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.910285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee95a0 00:21:42.814 [2024-12-16 14:37:34.911947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.911992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.924082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee8d30 00:21:42.814 [2024-12-16 14:37:34.925707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.925738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.937669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee84c0 00:21:42.814 [2024-12-16 14:37:34.939234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.939295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.950996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee7c50 00:21:42.814 [2024-12-16 14:37:34.952560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.952591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.964386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee73e0 00:21:42.814 [2024-12-16 14:37:34.965915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.965957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.977742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee6b70 00:21:42.814 [2024-12-16 14:37:34.979421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.979489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:34.991974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee6300 00:21:42.814 [2024-12-16 14:37:34.993483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:34.993517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:42.814 [2024-12-16 14:37:35.006766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee5a90 00:21:42.814 [2024-12-16 14:37:35.008507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.814 [2024-12-16 14:37:35.008594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.077 [2024-12-16 14:37:35.023835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee5220 00:21:43.078 [2024-12-16 14:37:35.025408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.025475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:43.078 [2024-12-16 14:37:35.039085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee49b0 00:21:43.078 [2024-12-16 14:37:35.040678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.040723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:43.078 [2024-12-16 14:37:35.053081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee4140 00:21:43.078 [2024-12-16 14:37:35.054480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.054551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:43.078 [2024-12-16 14:37:35.066511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee38d0 00:21:43.078 
[2024-12-16 14:37:35.067998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.068043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:43.078 [2024-12-16 14:37:35.079947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee3060 00:21:43.078 [2024-12-16 14:37:35.081301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.081346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:43.078 [2024-12-16 14:37:35.093382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee27f0 00:21:43.078 [2024-12-16 14:37:35.094763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.094808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:43.078 18091.00 IOPS, 70.67 MiB/s [2024-12-16T14:37:35.278Z] [2024-12-16 14:37:35.107968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee1f80 00:21:43.078 [2024-12-16 14:37:35.109293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.109338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:43.078 [2024-12-16 14:37:35.121476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee1710 00:21:43.078 [2024-12-16 14:37:35.122836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.122899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:43.078 [2024-12-16 14:37:35.135071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee0ea0 00:21:43.078 [2024-12-16 14:37:35.136401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.136471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:43.078 [2024-12-16 14:37:35.148374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee0630 00:21:43.078 [2024-12-16 14:37:35.149656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.149700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:43.078 [2024-12-16 14:37:35.161612] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016edfdc0 00:21:43.078 [2024-12-16 14:37:35.162925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.162971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:43.078 [2024-12-16 14:37:35.174964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016edf550 00:21:43.078 [2024-12-16 14:37:35.176251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.176295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:43.078 [2024-12-16 14:37:35.188438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016edece0 00:21:43.078 [2024-12-16 14:37:35.189668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.189712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:43.078 [2024-12-16 14:37:35.202022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ede470 00:21:43.078 [2024-12-16 14:37:35.203337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.203381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:43.078 [2024-12-16 14:37:35.220783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eddc00 00:21:43.078 [2024-12-16 14:37:35.223118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.223165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.078 [2024-12-16 14:37:35.234267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ede470 00:21:43.078 [2024-12-16 14:37:35.236636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.236666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:43.078 [2024-12-16 14:37:35.247775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016edece0 00:21:43.078 [2024-12-16 14:37:35.249964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.250009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:43.078 [2024-12-16 14:37:35.261097] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016edf550 00:21:43.078 [2024-12-16 14:37:35.263415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.078 [2024-12-16 14:37:35.263466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:43.338 [2024-12-16 14:37:35.275158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016edfdc0 00:21:43.338 [2024-12-16 14:37:35.277532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.338 [2024-12-16 14:37:35.277574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:43.338 [2024-12-16 14:37:35.289302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee0630 00:21:43.338 [2024-12-16 14:37:35.291617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.338 [2024-12-16 14:37:35.291660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:43.338 [2024-12-16 14:37:35.302706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee0ea0 00:21:43.338 [2024-12-16 14:37:35.304991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.338 [2024-12-16 14:37:35.305035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:43.338 [2024-12-16 14:37:35.316543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee1710 00:21:43.338 [2024-12-16 14:37:35.318804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.338 [2024-12-16 14:37:35.318849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:43.338 [2024-12-16 14:37:35.331497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee1f80 00:21:43.338 [2024-12-16 14:37:35.333956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.338 [2024-12-16 14:37:35.334003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:43.338 [2024-12-16 14:37:35.347018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee27f0 00:21:43.338 [2024-12-16 14:37:35.349281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.338 [2024-12-16 14:37:35.349326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:21:43.338 [2024-12-16 14:37:35.361770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee3060 00:21:43.338 [2024-12-16 14:37:35.363998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.338 [2024-12-16 14:37:35.364043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:43.338 [2024-12-16 14:37:35.375817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee38d0 00:21:43.338 [2024-12-16 14:37:35.377943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.338 [2024-12-16 14:37:35.377988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:43.338 [2024-12-16 14:37:35.390138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee4140 00:21:43.338 [2024-12-16 14:37:35.392340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.338 [2024-12-16 14:37:35.392384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:43.338 [2024-12-16 14:37:35.404426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee49b0 00:21:43.338 [2024-12-16 14:37:35.406515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.339 [2024-12-16 14:37:35.406561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:43.339 [2024-12-16 14:37:35.420307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee5220 00:21:43.339 [2024-12-16 14:37:35.422424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.339 [2024-12-16 14:37:35.422482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:43.339 [2024-12-16 14:37:35.434638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee5a90 00:21:43.339 [2024-12-16 14:37:35.436739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.339 [2024-12-16 14:37:35.436786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:43.339 [2024-12-16 14:37:35.448561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee6300 00:21:43.339 [2024-12-16 14:37:35.450585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.339 [2024-12-16 14:37:35.450617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:21:43.339 [2024-12-16 14:37:35.462654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee6b70 00:21:43.339 [2024-12-16 14:37:35.464678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.339 [2024-12-16 14:37:35.464723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:43.339 [2024-12-16 14:37:35.476819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee73e0 00:21:43.339 [2024-12-16 14:37:35.478837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.339 [2024-12-16 14:37:35.478903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:43.339 [2024-12-16 14:37:35.491047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee7c50 00:21:43.339 [2024-12-16 14:37:35.493092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.339 [2024-12-16 14:37:35.493138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:43.339 [2024-12-16 14:37:35.505328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee84c0 00:21:43.339 [2024-12-16 14:37:35.507380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.339 [2024-12-16 14:37:35.507424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:43.339 [2024-12-16 14:37:35.519981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee8d30 00:21:43.339 [2024-12-16 14:37:35.521969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.339 [2024-12-16 14:37:35.522013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:43.339 [2024-12-16 14:37:35.534176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee95a0 00:21:43.339 [2024-12-16 14:37:35.536411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.339 [2024-12-16 14:37:35.536466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:43.598 [2024-12-16 14:37:35.548532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ee9e10 00:21:43.598 [2024-12-16 14:37:35.550345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.598 [2024-12-16 14:37:35.550390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:43.598 [2024-12-16 14:37:35.561906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eea680 00:21:43.598 [2024-12-16 14:37:35.563852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.598 [2024-12-16 14:37:35.563912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:43.598 [2024-12-16 14:37:35.575382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eeaef0 00:21:43.598 [2024-12-16 14:37:35.577175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.598 [2024-12-16 14:37:35.577219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:43.598 [2024-12-16 14:37:35.588937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eeb760 00:21:43.598 [2024-12-16 14:37:35.590742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.598 [2024-12-16 14:37:35.590785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:43.598 [2024-12-16 14:37:35.602239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eebfd0 00:21:43.598 [2024-12-16 14:37:35.604153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.598 [2024-12-16 14:37:35.604197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:43.598 [2024-12-16 14:37:35.616463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eec840 00:21:43.598 [2024-12-16 14:37:35.618253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.598 [2024-12-16 14:37:35.618298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:43.598 [2024-12-16 14:37:35.629977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eed0b0 00:21:43.598 [2024-12-16 14:37:35.631797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.598 [2024-12-16 14:37:35.631842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:43.598 [2024-12-16 14:37:35.643433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eed920 00:21:43.598 [2024-12-16 14:37:35.645154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.598 [2024-12-16 14:37:35.645198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:43.598 [2024-12-16 14:37:35.656731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eee190 00:21:43.598 [2024-12-16 14:37:35.658419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.599 [2024-12-16 14:37:35.658468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:43.599 [2024-12-16 14:37:35.669973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eeea00 00:21:43.599 [2024-12-16 14:37:35.671827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.599 [2024-12-16 14:37:35.671872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.599 [2024-12-16 14:37:35.683893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eef270 00:21:43.599 [2024-12-16 14:37:35.685687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.599 [2024-12-16 14:37:35.685732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:43.599 [2024-12-16 14:37:35.697742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eefae0 00:21:43.599 [2024-12-16 14:37:35.699549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.599 [2024-12-16 14:37:35.699579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:43.599 [2024-12-16 14:37:35.711324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef0350 00:21:43.599 [2024-12-16 14:37:35.713020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.599 [2024-12-16 14:37:35.713064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:43.599 [2024-12-16 14:37:35.725084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef0bc0 00:21:43.599 [2024-12-16 14:37:35.726745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.599 [2024-12-16 14:37:35.726776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:43.599 [2024-12-16 14:37:35.738478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef1430 00:21:43.599 [2024-12-16 14:37:35.740193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.599 [2024-12-16 14:37:35.740236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:43.599 [2024-12-16 14:37:35.751849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef1ca0 00:21:43.599 [2024-12-16 14:37:35.753451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.599 [2024-12-16 14:37:35.753487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:43.599 [2024-12-16 14:37:35.765139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef2510 00:21:43.599 [2024-12-16 14:37:35.766753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.599 [2024-12-16 14:37:35.766797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:43.599 [2024-12-16 14:37:35.778504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef2d80 00:21:43.599 [2024-12-16 14:37:35.780104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.599 [2024-12-16 14:37:35.780148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:43.599 [2024-12-16 14:37:35.791986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef35f0 00:21:43.599 [2024-12-16 14:37:35.793774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.599 [2024-12-16 14:37:35.793820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:43.857 [2024-12-16 14:37:35.806686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef3e60 00:21:43.857 [2024-12-16 14:37:35.808330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.857 [2024-12-16 14:37:35.808376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:43.857 [2024-12-16 14:37:35.820204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef46d0 00:21:43.857 [2024-12-16 14:37:35.821838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.857 [2024-12-16 14:37:35.821883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:43.857 [2024-12-16 14:37:35.833831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef4f40 00:21:43.857 [2024-12-16 14:37:35.835424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.858 [2024-12-16 14:37:35.835474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:43.858 [2024-12-16 14:37:35.847192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef57b0 00:21:43.858 [2024-12-16 14:37:35.848792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.858 [2024-12-16 14:37:35.848836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:43.858 [2024-12-16 14:37:35.860580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef6020 00:21:43.858 [2024-12-16 14:37:35.862043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.858 [2024-12-16 14:37:35.862087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:43.858 [2024-12-16 14:37:35.874024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef6890 00:21:43.858 [2024-12-16 14:37:35.875550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.858 [2024-12-16 14:37:35.875580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:43.858 [2024-12-16 14:37:35.887397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef7100 00:21:43.858 [2024-12-16 14:37:35.888867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.858 [2024-12-16 14:37:35.888925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.858 [2024-12-16 14:37:35.900810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef7970 00:21:43.858 [2024-12-16 14:37:35.902270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.858 [2024-12-16 14:37:35.902314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.858 [2024-12-16 14:37:35.914242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef81e0 00:21:43.858 [2024-12-16 14:37:35.915729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.858 [2024-12-16 14:37:35.915772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:43.858 [2024-12-16 14:37:35.927731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef8a50 00:21:43.858 [2024-12-16 14:37:35.929122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.858 [2024-12-16 
14:37:35.929166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:43.858 [2024-12-16 14:37:35.941207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef92c0 00:21:43.858 [2024-12-16 14:37:35.942650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.858 [2024-12-16 14:37:35.942695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:43.858 [2024-12-16 14:37:35.954550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016ef9b30 00:21:43.858 [2024-12-16 14:37:35.955965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.858 [2024-12-16 14:37:35.956009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:43.858 [2024-12-16 14:37:35.968034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efa3a0 00:21:43.858 [2024-12-16 14:37:35.969431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.858 [2024-12-16 14:37:35.969500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:43.858 [2024-12-16 14:37:35.981637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efac10 00:21:43.858 [2024-12-16 14:37:35.983060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.858 [2024-12-16 14:37:35.983093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:43.858 [2024-12-16 14:37:35.995621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efb480 00:21:43.858 [2024-12-16 14:37:35.996964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.858 [2024-12-16 14:37:35.997010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:43.858 [2024-12-16 14:37:36.009661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efbcf0 00:21:43.858 [2024-12-16 14:37:36.011044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.858 [2024-12-16 14:37:36.011077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:43.858 [2024-12-16 14:37:36.023116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efc560 00:21:43.858 [2024-12-16 14:37:36.024475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19700 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:43.858 [2024-12-16 14:37:36.024511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:43.858 [2024-12-16 14:37:36.038687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efcdd0 00:21:43.858 [2024-12-16 14:37:36.040217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.858 [2024-12-16 14:37:36.040261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:43.858 [2024-12-16 14:37:36.055071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efd640 00:21:44.117 [2024-12-16 14:37:36.056661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.117 [2024-12-16 14:37:36.056707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:44.117 [2024-12-16 14:37:36.070227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efdeb0 00:21:44.117 [2024-12-16 14:37:36.071640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.117 [2024-12-16 14:37:36.071684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:44.117 [2024-12-16 14:37:36.083830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016efe720 00:21:44.117 [2024-12-16 14:37:36.085095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.117 [2024-12-16 14:37:36.085137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:44.117 [2024-12-16 14:37:36.097277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b76f0) with pdu=0x200016eff3c8 00:21:44.117 [2024-12-16 14:37:36.098504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.117 [2024-12-16 14:37:36.098574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:44.117 18217.00 IOPS, 71.16 MiB/s 00:21:44.117 Latency(us) 00:21:44.117 [2024-12-16T14:37:36.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.117 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:44.117 nvme0n1 : 2.01 18210.32 71.13 0.00 0.00 7023.08 3738.53 25499.46 00:21:44.117 [2024-12-16T14:37:36.317Z] =================================================================================================================== 00:21:44.117 [2024-12-16T14:37:36.317Z] Total : 18210.32 71.13 0.00 0.00 7023.08 3738.53 25499.46 00:21:44.117 { 00:21:44.117 "results": [ 00:21:44.117 { 00:21:44.117 "job": "nvme0n1", 00:21:44.117 "core_mask": "0x2", 00:21:44.117 "workload": "randwrite", 00:21:44.117 "status": "finished", 00:21:44.117 
"queue_depth": 128, 00:21:44.117 "io_size": 4096, 00:21:44.117 "runtime": 2.007763, 00:21:44.117 "iops": 18210.316655900122, 00:21:44.117 "mibps": 71.13404943710985, 00:21:44.117 "io_failed": 0, 00:21:44.117 "io_timeout": 0, 00:21:44.117 "avg_latency_us": 7023.0835114450665, 00:21:44.117 "min_latency_us": 3738.530909090909, 00:21:44.117 "max_latency_us": 25499.46181818182 00:21:44.117 } 00:21:44.117 ], 00:21:44.117 "core_count": 1 00:21:44.117 } 00:21:44.117 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:44.118 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:44.118 | .driver_specific 00:21:44.118 | .nvme_error 00:21:44.118 | .status_code 00:21:44.118 | .command_transient_transport_error' 00:21:44.118 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:44.118 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96671 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96671 ']' 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96671 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96671 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:44.376 killing process with pid 96671 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96671' 00:21:44.376 Received shutdown signal, test time was about 2.000000 seconds 00:21:44.376 00:21:44.376 Latency(us) 00:21:44.376 [2024-12-16T14:37:36.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.376 [2024-12-16T14:37:36.576Z] =================================================================================================================== 00:21:44.376 [2024-12-16T14:37:36.576Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96671 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96671 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:44.376 14:37:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=96717 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 96717 /var/tmp/bperf.sock 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 96717 ']' 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.376 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:44.635 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:44.635 Zero copy mechanism will not be used. 00:21:44.635 [2024-12-16 14:37:36.590860] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:21:44.635 [2024-12-16 14:37:36.590961] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96717 ] 00:21:44.635 [2024-12-16 14:37:36.727788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.635 [2024-12-16 14:37:36.746605] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.635 [2024-12-16 14:37:36.773583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:44.635 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.635 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:44.635 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:44.635 14:37:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:45.202 14:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:45.202 14:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.202 14:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:45.202 14:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.202 14:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:45.202 14:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:45.202 nvme0n1 00:21:45.202 14:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:45.202 14:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.202 14:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:45.202 14:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.202 14:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:45.202 14:37:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:45.462 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:45.462 Zero copy mechanism will not be used. 00:21:45.462 Running I/O for 2 seconds... 
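The trace above is the setup for this digest-error pass of host/digest.sh: bdevperf is started against /var/tmp/bperf.sock, per-status-code NVMe error statistics are enabled, crc32c corruption is injected through the accel error-injection RPC, the controller is attached with data digest (--ddgst) enabled, and after perform_tests the transient-transport-error counter is read back with the jq filter shown earlier. Below is a minimal shell sketch of that flow, not the autotest script itself; the bperf socket, controller parameters and jq filter are taken from the trace, while the target-side RPC socket (reached via rpc_cmd in the harness) is an assumption.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock
  target_sock=/var/tmp/spdk.sock        # assumption: default socket behind rpc_cmd

  # Start bdevperf in the background with the workload from the trace
  # (the real harness then waits for $bperf_sock to appear before issuing RPCs).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r "$bperf_sock" -w randwrite -o 131072 -t 2 -q 16 -z &

  # Keep per-status-code NVMe error counters on the initiator (bdevperf) side.
  "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any previous injection, then attach the TCP controller with data digest enabled.
  "$rpc" -s "$target_sock" accel_error_inject_error -o crc32c -t disable
  "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt crc32c results (same -o/-t/-i arguments as the trace) so digest
  # verification fails and writes complete with TRANSIENT TRANSPORT ERROR (00/22),
  # as in the completion records that follow.
  "$rpc" -s "$target_sock" accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the 2-second workload, then read the error counter back from the iostat JSON.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests
  "$rpc" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

As in the earlier 4096-byte pass, the test only requires this counter to be non-zero (the "(( 143 > 0 ))" check above), not an exact count.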
00:21:45.462 [2024-12-16 14:37:37.494309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.494455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.494482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.499316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.499430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.499453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.504168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.504292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.504313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.508960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.509067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.509088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.513636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.513753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.513773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.518234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.518380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.518400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.523329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.523421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.523443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.528065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.528187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.528207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.532972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.533066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.533087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.537697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.537803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.537824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.542340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.542456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.542487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.546903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.547009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.547028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.551622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.551702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.551722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.556308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.556415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.556434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.560869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.560985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.561005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.565505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.565601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.565620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.570006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.570122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.570142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.574834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.574969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.574989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.579631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.579750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.579769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.584391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.584506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.584526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.589026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.589142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.589162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.462 [2024-12-16 14:37:37.593717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.462 [2024-12-16 14:37:37.593850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.462 [2024-12-16 14:37:37.593871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.463 [2024-12-16 14:37:37.598551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.463 [2024-12-16 14:37:37.598640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.463 [2024-12-16 14:37:37.598660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.463 [2024-12-16 14:37:37.603236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.463 [2024-12-16 14:37:37.603390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.463 [2024-12-16 14:37:37.603410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.463 [2024-12-16 14:37:37.607922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.463 [2024-12-16 14:37:37.608015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.463 [2024-12-16 14:37:37.608035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.463 [2024-12-16 14:37:37.612550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.463 [2024-12-16 14:37:37.612655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.463 [2024-12-16 14:37:37.612675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.463 [2024-12-16 14:37:37.617130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.463 [2024-12-16 14:37:37.617236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.463 [2024-12-16 14:37:37.617256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.463 [2024-12-16 14:37:37.621802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.463 [2024-12-16 14:37:37.621907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.463 [2024-12-16 14:37:37.621927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.463 [2024-12-16 14:37:37.626319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.463 [2024-12-16 14:37:37.626437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.463 [2024-12-16 14:37:37.626468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.463 [2024-12-16 14:37:37.631073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.463 [2024-12-16 14:37:37.631158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.463 [2024-12-16 14:37:37.631179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.463 [2024-12-16 14:37:37.635900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.463 [2024-12-16 14:37:37.635994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.463 [2024-12-16 14:37:37.636014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.463 [2024-12-16 14:37:37.640491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.463 [2024-12-16 14:37:37.640602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.463 [2024-12-16 14:37:37.640622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.463 [2024-12-16 14:37:37.645115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.463 [2024-12-16 14:37:37.645221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.463 [2024-12-16 14:37:37.645241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.463 [2024-12-16 14:37:37.649842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.463 [2024-12-16 14:37:37.649957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.463 [2024-12-16 14:37:37.649977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.463 [2024-12-16 14:37:37.654508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.463 [2024-12-16 14:37:37.654613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.463 [2024-12-16 
14:37:37.654633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.722 [2024-12-16 14:37:37.659721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.722 [2024-12-16 14:37:37.659819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.722 [2024-12-16 14:37:37.659839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.722 [2024-12-16 14:37:37.665013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.722 [2024-12-16 14:37:37.665124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.665144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.669688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.669793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.669813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.674518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.674632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.674652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.679266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.679378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.679398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.684019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.684162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.684182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.688842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.688935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:45.723 [2024-12-16 14:37:37.688955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.693434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.693552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.693572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.698176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.698295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.698318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.702747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.702841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.702861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.707360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.707464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.707485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.711694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.711933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.712007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.716268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.716362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.716390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.721008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.721107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.721129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.725710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.725801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.725821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.730329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.730436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.730458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.735491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.735599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.735620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.740522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.740618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.740638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.745609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.745719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.745740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.750871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.750993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.751015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.756209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.756303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.756324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.761487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.761601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.761630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.766453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.766571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.766591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.771498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.771605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.771626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.776496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.776597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.776617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.781449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.781542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.781563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.786197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.786291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.786326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.791036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.791133] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.791154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.723 [2024-12-16 14:37:37.795970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.723 [2024-12-16 14:37:37.796067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.723 [2024-12-16 14:37:37.796087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.800983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.801085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.801106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.805826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.805918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.805938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.810490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.810582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.810602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.815219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.815341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.815361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.820107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.820198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.820217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.824913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.825013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.825033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.829678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.829781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.829801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.834470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.834577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.834597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.839460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.839573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.839593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.844245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.844345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.844365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.849071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.849164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.849184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.853975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.854082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.854103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.858786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 
14:37:37.858868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.858914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.863612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.863703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.863723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.868273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.868362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.868383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.873475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.873571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.873592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.878426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.878524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.878544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.883402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.883534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.883555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.888281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.888385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.888405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.893299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with 
pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.893402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.893422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.898297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.898391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.898411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.903236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.903345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.903365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.908134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.908230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.908250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.912962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.913056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.913076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.724 [2024-12-16 14:37:37.917918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.724 [2024-12-16 14:37:37.918043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.724 [2024-12-16 14:37:37.918064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.984 [2024-12-16 14:37:37.923194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.984 [2024-12-16 14:37:37.923325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.984 [2024-12-16 14:37:37.923345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.984 [2024-12-16 14:37:37.928486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.984 [2024-12-16 14:37:37.928587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.984 [2024-12-16 14:37:37.928607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.984 [2024-12-16 14:37:37.933205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.984 [2024-12-16 14:37:37.933306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.984 [2024-12-16 14:37:37.933325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.984 [2024-12-16 14:37:37.937968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.984 [2024-12-16 14:37:37.938062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.984 [2024-12-16 14:37:37.938082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.984 [2024-12-16 14:37:37.943052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.984 [2024-12-16 14:37:37.943138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.984 [2024-12-16 14:37:37.943160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.984 [2024-12-16 14:37:37.948091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.984 [2024-12-16 14:37:37.948185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.984 [2024-12-16 14:37:37.948205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.984 [2024-12-16 14:37:37.952942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.984 [2024-12-16 14:37:37.953035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.984 [2024-12-16 14:37:37.953055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.984 [2024-12-16 14:37:37.957581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.984 [2024-12-16 14:37:37.957672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.984 [2024-12-16 14:37:37.957693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.984 [2024-12-16 14:37:37.962153] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.984 [2024-12-16 14:37:37.962256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.984 [2024-12-16 14:37:37.962276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.984 [2024-12-16 14:37:37.966864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.984 [2024-12-16 14:37:37.966990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.984 [2024-12-16 14:37:37.967011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.984 [2024-12-16 14:37:37.971616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.984 [2024-12-16 14:37:37.971707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.984 [2024-12-16 14:37:37.971728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.984 [2024-12-16 14:37:37.976194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.984 [2024-12-16 14:37:37.976297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.984 [2024-12-16 14:37:37.976316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.984 [2024-12-16 14:37:37.981220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.984 [2024-12-16 14:37:37.981314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.984 [2024-12-16 14:37:37.981334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.984 [2024-12-16 14:37:37.985906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.984 [2024-12-16 14:37:37.985997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.984 [2024-12-16 14:37:37.986017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.984 [2024-12-16 14:37:37.990530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.984 [2024-12-16 14:37:37.990620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.984 [2024-12-16 14:37:37.990639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.984 
[2024-12-16 14:37:37.995167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.984 [2024-12-16 14:37:37.995262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.984 [2024-12-16 14:37:37.995297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.984 [2024-12-16 14:37:37.999971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.984 [2024-12-16 14:37:38.000064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.984 [2024-12-16 14:37:38.000084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.984 [2024-12-16 14:37:38.004632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.004733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.004752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.009229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.009321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.009343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.013872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.013976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.013997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.018449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.018540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.018560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.022974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.023055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.023075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.027743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.027833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.027853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.032295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.032384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.032404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.037096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.037187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.037208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.041750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.041839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.041859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.046276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.046369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.046388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.050851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.050964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.050985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.055656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.055748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.055767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.060234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.060328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.060347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.064814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.064904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.064923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.069445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.069536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.069556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.074599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.074667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.074687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.079977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.080084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.080105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.085268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.085360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.085381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.091010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.091100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.091123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.096252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.096353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.096373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.101403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.101524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.101590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.106527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.106620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.106640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.111460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.111561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.111581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.116031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.116122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.116142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.120772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.120866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.120887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.125361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.125473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.125493] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.129997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.130088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.130107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.134685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.985 [2024-12-16 14:37:38.134794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.985 [2024-12-16 14:37:38.134814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.985 [2024-12-16 14:37:38.139320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.986 [2024-12-16 14:37:38.139411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.986 [2024-12-16 14:37:38.139431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.986 [2024-12-16 14:37:38.143983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.986 [2024-12-16 14:37:38.144076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.986 [2024-12-16 14:37:38.144096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.986 [2024-12-16 14:37:38.148636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.986 [2024-12-16 14:37:38.148727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.986 [2024-12-16 14:37:38.148747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.986 [2024-12-16 14:37:38.153114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.986 [2024-12-16 14:37:38.153213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.986 [2024-12-16 14:37:38.153232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.986 [2024-12-16 14:37:38.157807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.986 [2024-12-16 14:37:38.157900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.986 [2024-12-16 14:37:38.157920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.986 [2024-12-16 14:37:38.162350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.986 [2024-12-16 14:37:38.162442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.986 [2024-12-16 14:37:38.162473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.986 [2024-12-16 14:37:38.166932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.986 [2024-12-16 14:37:38.167011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.986 [2024-12-16 14:37:38.167030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.986 [2024-12-16 14:37:38.171746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.986 [2024-12-16 14:37:38.171840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.986 [2024-12-16 14:37:38.171860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.986 [2024-12-16 14:37:38.176377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:45.986 [2024-12-16 14:37:38.176503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.986 [2024-12-16 14:37:38.176523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.181696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.181794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.181814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.186604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.186711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.186763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.191654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.191745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 
14:37:38.191765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.196146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.196250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.196270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.200886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.200976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.200997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.205527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.205619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.205639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.210025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.210129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.210149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.214681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.214773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.214792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.219236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.219344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.219365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.223907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.224001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:46.246 [2024-12-16 14:37:38.224022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.228569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.228671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.228691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.233142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.233232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.233251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.237801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.237895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.237915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.242392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.242496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.242524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.247001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.247098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.247124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.251861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.251953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.251974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.256420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.256524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.256544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.261007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.261116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.261135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.265756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.265848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.265868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.270339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.270431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.270452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.274944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.275025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.275045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.246 [2024-12-16 14:37:38.279693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.246 [2024-12-16 14:37:38.279785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.246 [2024-12-16 14:37:38.279806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.284222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.284312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.284332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.288866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.288957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.288977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.293380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.293486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.293506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.297985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.298076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.298096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.302642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.302735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.302754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.307195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.307333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.307352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.311923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.312012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.312032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.316687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.316780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.316800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.321399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.321510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.321530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.326081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.326174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.326194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.330687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.330765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.330784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.335435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.335541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.335573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.340081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.340174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.340193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.344647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.344742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.344762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.349254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.349347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.349367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.353906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.353999] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.354018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.358540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.358637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.358656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.363057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.363137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.363157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.367737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.367831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.367851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.372274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.372367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.372387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.377001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.377096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.377116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.381617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.381709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.381728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.386142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 
14:37:38.386236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.386255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.390816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.390933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.390953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.395430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.395534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.395554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.399986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.400077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.400097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.404605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.404698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.404718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.247 [2024-12-16 14:37:38.409150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.247 [2024-12-16 14:37:38.409244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.247 [2024-12-16 14:37:38.409264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.248 [2024-12-16 14:37:38.413782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.248 [2024-12-16 14:37:38.413875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.248 [2024-12-16 14:37:38.413895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.248 [2024-12-16 14:37:38.418278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with 
pdu=0x200016eff3c8 00:21:46.248 [2024-12-16 14:37:38.418370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.248 [2024-12-16 14:37:38.418389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.248 [2024-12-16 14:37:38.422934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.248 [2024-12-16 14:37:38.423021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.248 [2024-12-16 14:37:38.423041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.248 [2024-12-16 14:37:38.427806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.248 [2024-12-16 14:37:38.427900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.248 [2024-12-16 14:37:38.427921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.248 [2024-12-16 14:37:38.432521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.248 [2024-12-16 14:37:38.432613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.248 [2024-12-16 14:37:38.432633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.248 [2024-12-16 14:37:38.437052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.248 [2024-12-16 14:37:38.437147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.248 [2024-12-16 14:37:38.437166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.248 [2024-12-16 14:37:38.442212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.248 [2024-12-16 14:37:38.442326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.248 [2024-12-16 14:37:38.442346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.508 [2024-12-16 14:37:38.447399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.508 [2024-12-16 14:37:38.447504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.508 [2024-12-16 14:37:38.447524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.508 [2024-12-16 14:37:38.452359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.508 [2024-12-16 14:37:38.452449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.508 [2024-12-16 14:37:38.452481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.508 [2024-12-16 14:37:38.456991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.508 [2024-12-16 14:37:38.457082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.508 [2024-12-16 14:37:38.457102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.508 [2024-12-16 14:37:38.461565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.508 [2024-12-16 14:37:38.461658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.508 [2024-12-16 14:37:38.461677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.508 [2024-12-16 14:37:38.466062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.508 [2024-12-16 14:37:38.466155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.508 [2024-12-16 14:37:38.466175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.508 [2024-12-16 14:37:38.470692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.508 [2024-12-16 14:37:38.470783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.508 [2024-12-16 14:37:38.470804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.508 [2024-12-16 14:37:38.475312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.508 [2024-12-16 14:37:38.475404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.508 [2024-12-16 14:37:38.475423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.508 [2024-12-16 14:37:38.479923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.508 [2024-12-16 14:37:38.480016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.508 [2024-12-16 14:37:38.480036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.508 [2024-12-16 14:37:38.484579] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.508 [2024-12-16 14:37:38.484671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.508 [2024-12-16 14:37:38.484690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.508 6470.00 IOPS, 808.75 MiB/s [2024-12-16T14:37:38.708Z] [2024-12-16 14:37:38.489827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.508 [2024-12-16 14:37:38.489904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.508 [2024-12-16 14:37:38.489923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.508 [2024-12-16 14:37:38.494380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.508 [2024-12-16 14:37:38.494487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.508 [2024-12-16 14:37:38.494507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.508 [2024-12-16 14:37:38.499206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.508 [2024-12-16 14:37:38.499309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.508 [2024-12-16 14:37:38.499329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.508 [2024-12-16 14:37:38.503934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.508 [2024-12-16 14:37:38.504029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.508 [2024-12-16 14:37:38.504049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.508 [2024-12-16 14:37:38.508489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.508 [2024-12-16 14:37:38.508582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.508 [2024-12-16 14:37:38.508602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.508 [2024-12-16 14:37:38.513055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.508 [2024-12-16 14:37:38.513143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.508 [2024-12-16 14:37:38.513162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:21:46.508 [2024-12-16 14:37:38.517700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.508 [2024-12-16 14:37:38.517792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.508 [2024-12-16 14:37:38.517812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.508 [2024-12-16 14:37:38.522296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.508 [2024-12-16 14:37:38.522387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.522406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.526927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.527005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.527025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.531631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.531727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.531749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.536242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.536337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.536356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.540810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.540899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.540918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.545346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.545451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.545483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.549998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.550089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.550108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.554597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.554676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.554696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.559344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.559440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.559460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.563825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.563914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.563933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.568401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.568502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.568522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.572940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.573032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.573051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.577544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.577637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.577656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.582101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.582194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.582214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.586668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.586745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.586764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.591481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.591596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.591616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.596096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.596185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.596205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.600906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.600999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.601019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.605576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.605671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.605691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.610132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.610224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.610243] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.614679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.614769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.614789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.619282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.619376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.619395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.623969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.624062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.624081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.628639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.628733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.628752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.633232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.633309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.633328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.637984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.638079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.638100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.642629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.642722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.642741] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.647120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.647198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.509 [2024-12-16 14:37:38.647217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.509 [2024-12-16 14:37:38.652030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.509 [2024-12-16 14:37:38.652118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.510 [2024-12-16 14:37:38.652139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.510 [2024-12-16 14:37:38.656654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.510 [2024-12-16 14:37:38.656747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.510 [2024-12-16 14:37:38.656766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.510 [2024-12-16 14:37:38.661201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.510 [2024-12-16 14:37:38.661291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.510 [2024-12-16 14:37:38.661311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.510 [2024-12-16 14:37:38.665870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.510 [2024-12-16 14:37:38.665962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.510 [2024-12-16 14:37:38.665981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.510 [2024-12-16 14:37:38.670476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.510 [2024-12-16 14:37:38.670577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.510 [2024-12-16 14:37:38.670597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.510 [2024-12-16 14:37:38.675061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.510 [2024-12-16 14:37:38.675143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.510 [2024-12-16 
14:37:38.675163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.510 [2024-12-16 14:37:38.679847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.510 [2024-12-16 14:37:38.679940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.510 [2024-12-16 14:37:38.679960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.510 [2024-12-16 14:37:38.684546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.510 [2024-12-16 14:37:38.684651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.510 [2024-12-16 14:37:38.684671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.510 [2024-12-16 14:37:38.689116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.510 [2024-12-16 14:37:38.689207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.510 [2024-12-16 14:37:38.689227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.510 [2024-12-16 14:37:38.693775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.510 [2024-12-16 14:37:38.693867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.510 [2024-12-16 14:37:38.693887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.510 [2024-12-16 14:37:38.698318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.510 [2024-12-16 14:37:38.698409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.510 [2024-12-16 14:37:38.698429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.510 [2024-12-16 14:37:38.703374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.510 [2024-12-16 14:37:38.703490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.510 [2024-12-16 14:37:38.703526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.708494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.708570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
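[editor's note] This stretch of the log repeats one pattern: tcp.c:data_crc32_calc_done reports a data digest (CRC32C) mismatch on a received WRITE payload, and the corresponding command is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. a retryable transport-level failure rather than a media error. As a rough illustration only — not SPDK's implementation — the sketch below shows the general idea of such a data-digest check; the helper names `crc32c` and `data_digest_ok` are hypothetical.

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF


def data_digest_ok(payload: bytes, received_digest: int) -> bool:
    # Recompute the digest over the received payload and compare it with the
    # digest carried alongside the data. A mismatch is what the log above
    # reports as "Data digest error"; the affected command is completed with
    # a transient transport error so the initiator may retry it.
    return crc32c(payload) == received_digest


if __name__ == "__main__":
    payload = bytes(range(64))
    good_digest = crc32c(payload)
    assert data_digest_ok(payload, good_digest)          # intact payload passes
    corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]  # flip one bit
    assert not data_digest_ok(corrupted, good_digest)     # mismatch is detected
```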
00:21:46.771 [2024-12-16 14:37:38.708590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.713513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.713605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.771 [2024-12-16 14:37:38.713625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.717994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.718088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.771 [2024-12-16 14:37:38.718107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.722642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.722734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.771 [2024-12-16 14:37:38.722753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.727306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.727397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.771 [2024-12-16 14:37:38.727416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.732002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.732096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.771 [2024-12-16 14:37:38.732115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.736649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.736727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.771 [2024-12-16 14:37:38.736748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.741305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.741399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.771 [2024-12-16 14:37:38.741419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.745959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.746053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.771 [2024-12-16 14:37:38.746072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.750551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.750643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.771 [2024-12-16 14:37:38.750663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.755022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.755089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.771 [2024-12-16 14:37:38.755109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.759666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.759769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.771 [2024-12-16 14:37:38.759788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.764185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.764276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.771 [2024-12-16 14:37:38.764295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.768841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.768934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.771 [2024-12-16 14:37:38.768953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.773449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.773556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.771 [2024-12-16 14:37:38.773583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.778129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.778249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.771 [2024-12-16 14:37:38.778277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.782741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.782847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.771 [2024-12-16 14:37:38.782870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.787587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.787694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.771 [2024-12-16 14:37:38.787715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.771 [2024-12-16 14:37:38.792374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.771 [2024-12-16 14:37:38.792484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.792505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.797008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.797102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.797122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.801760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.801851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.801871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.806247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.806348] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.806367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.810871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.810994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.811014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.815628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.815730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.815750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.820108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.820210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.820230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.824866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.824954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.824974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.829536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.829631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.829651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.834174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.834268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.834289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.838795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.838915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.838936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.843487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.843595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.843615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.848178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.848270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.848291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.852848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.852939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.852959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.857423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.857524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.857544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.861931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.862050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.862069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.866567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.866661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.866681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.871194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 
14:37:38.871311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.871331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.875981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.876075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.876095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.880574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.880663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.880682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.885066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.885168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.885188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.889825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.889919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.889938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.894381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.894522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.894544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.899225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.899331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.899351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.903954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with 
pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.904049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.904069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.908464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.908570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.908590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.913134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.913225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.913245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.917811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.917903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.772 [2024-12-16 14:37:38.917923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.772 [2024-12-16 14:37:38.922443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.772 [2024-12-16 14:37:38.922566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.773 [2024-12-16 14:37:38.922585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.773 [2024-12-16 14:37:38.927242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.773 [2024-12-16 14:37:38.927346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.773 [2024-12-16 14:37:38.927366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.773 [2024-12-16 14:37:38.931977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.773 [2024-12-16 14:37:38.932072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.773 [2024-12-16 14:37:38.932092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.773 [2024-12-16 14:37:38.936672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.773 [2024-12-16 14:37:38.936768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.773 [2024-12-16 14:37:38.936787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.773 [2024-12-16 14:37:38.941187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.773 [2024-12-16 14:37:38.941281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.773 [2024-12-16 14:37:38.941301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.773 [2024-12-16 14:37:38.946189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.773 [2024-12-16 14:37:38.946284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.773 [2024-12-16 14:37:38.946304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.773 [2024-12-16 14:37:38.951146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.773 [2024-12-16 14:37:38.951242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.773 [2024-12-16 14:37:38.951277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.773 [2024-12-16 14:37:38.956385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.773 [2024-12-16 14:37:38.956489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.773 [2024-12-16 14:37:38.956511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.773 [2024-12-16 14:37:38.961602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.773 [2024-12-16 14:37:38.961685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.773 [2024-12-16 14:37:38.961707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.773 [2024-12-16 14:37:38.967184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:46.773 [2024-12-16 14:37:38.967310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.773 [2024-12-16 14:37:38.967331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:38.972708] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:38.972805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:38.972873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:38.978025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:38.978119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:38.978139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:38.983123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:38.983242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:38.983263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:38.988388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:38.988524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:38.988556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:38.993352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:38.993444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:38.993477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:38.998095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:38.998191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:38.998212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:39.003063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:39.003148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:39.003170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.033 
[2024-12-16 14:37:39.007907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:39.007997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:39.008017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:39.012636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:39.012731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:39.012751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:39.020741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:39.020852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:39.020874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:39.026446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:39.026522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:39.026547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:39.032590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:39.032676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:39.032701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:39.038310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:39.038436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:39.038460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:39.043598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:39.043690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:39.043727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:39.048713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:39.048786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:39.048806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:39.053328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:39.053406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:39.053426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:39.058024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:39.058096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:39.058116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:39.062650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:39.062728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:39.062748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:39.067436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.033 [2024-12-16 14:37:39.067546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.033 [2024-12-16 14:37:39.067566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.033 [2024-12-16 14:37:39.071996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.072081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.072100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.076669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.076743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.076762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.081264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.081339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.081358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.085963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.086036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.086055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.090770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.090841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.090861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.095796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.095883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.095904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.101018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.101103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.101123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.106350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.106421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.106457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.112118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.112216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.112238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.117488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.117595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.117618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.122873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.122982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.123005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.128249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.128323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.128342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.133534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.133622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.133642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.138522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.138597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.138618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.143260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.143349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.143369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.147982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.148052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.148071] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.152805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.152879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.152898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.157443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.157519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.157539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.162075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.162149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.162168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.166814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.166947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.166969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.171551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.171635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.171655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.176565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.176653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.176673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.181295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.181364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.181384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.186097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.186167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.186186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.190614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.190697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.190716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.195109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.195185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.195221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.199716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.199795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.199814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.034 [2024-12-16 14:37:39.204256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.034 [2024-12-16 14:37:39.204343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.034 [2024-12-16 14:37:39.204362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.035 [2024-12-16 14:37:39.209001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.035 [2024-12-16 14:37:39.209073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.035 [2024-12-16 14:37:39.209093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.035 [2024-12-16 14:37:39.213684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.035 [2024-12-16 14:37:39.213780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.035 [2024-12-16 
14:37:39.213801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.035 [2024-12-16 14:37:39.218269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.035 [2024-12-16 14:37:39.218341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.035 [2024-12-16 14:37:39.218360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.035 [2024-12-16 14:37:39.222821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.035 [2024-12-16 14:37:39.222936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.035 [2024-12-16 14:37:39.222955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.035 [2024-12-16 14:37:39.227832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.035 [2024-12-16 14:37:39.227962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.035 [2024-12-16 14:37:39.227982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.294 [2024-12-16 14:37:39.232937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.294 [2024-12-16 14:37:39.233008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.294 [2024-12-16 14:37:39.233027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.294 [2024-12-16 14:37:39.237800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.294 [2024-12-16 14:37:39.237870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.294 [2024-12-16 14:37:39.237889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.242395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.242496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.242517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.246983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.247042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:47.295 [2024-12-16 14:37:39.247063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.251590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.251670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.251690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.256238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.256313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.256333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.260844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.260925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.260945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.265538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.265606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.265626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.270142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.270212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.270232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.274733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.274809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.274829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.279305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.279376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.279396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.283820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.283916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.283935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.288415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.288518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.288537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.293016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.293088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.293115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.297481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.297593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.297620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.301823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.301916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.301937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.306449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.306534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.306554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.310966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.311036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.311056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.315594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.315679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.315698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.320142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.320235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.320255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.324799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.324884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.324904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.329398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.329495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.329515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.333936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.334027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.334047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.338538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.338621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.338640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.343059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.343141] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.343161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.347732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.347812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.347831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.352267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.352348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.352367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.356881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.356972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.356991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.361597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.361678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.295 [2024-12-16 14:37:39.361697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.295 [2024-12-16 14:37:39.366170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.295 [2024-12-16 14:37:39.366252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.366272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.370763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.370844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.370864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.375361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.375467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.375487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.380051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.380141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.380160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.384634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.384719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.384752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.389295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.389377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.389396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.393858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.393936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.393955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.398505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.398589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.398608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.402997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.403093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.403113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.407667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 
14:37:39.407754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.407773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.412172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.412250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.412268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.416754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.416836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.416855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.421241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.421332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.421351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.425832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.425921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.425941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.430369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.430461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.430480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.434901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.435000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.435020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.439631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with 
pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.439710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.439729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.444170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.444253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.444273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.448820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.448903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.448923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.453328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.453406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.453426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.458050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.458131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.458150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.462626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.462710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.462729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.467330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.467414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.467433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.471918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.471991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.472011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.476503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.476588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.476607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.481026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.481109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.481128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.296 [2024-12-16 14:37:39.485625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.485717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.485736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.296 6502.50 IOPS, 812.81 MiB/s [2024-12-16T14:37:39.496Z] [2024-12-16 14:37:39.491369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b7a30) with pdu=0x200016eff3c8 00:21:47.296 [2024-12-16 14:37:39.491476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.296 [2024-12-16 14:37:39.491514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.555 00:21:47.555 Latency(us) 00:21:47.555 [2024-12-16T14:37:39.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.555 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:47.555 nvme0n1 : 2.00 6499.45 812.43 0.00 0.00 2456.04 1720.32 8817.57 00:21:47.555 [2024-12-16T14:37:39.755Z] =================================================================================================================== 00:21:47.555 [2024-12-16T14:37:39.755Z] Total : 6499.45 812.43 0.00 0.00 2456.04 1720.32 8817.57 00:21:47.555 { 00:21:47.555 "results": [ 00:21:47.555 { 00:21:47.555 "job": "nvme0n1", 00:21:47.555 "core_mask": "0x2", 00:21:47.555 "workload": "randwrite", 00:21:47.555 "status": "finished", 00:21:47.555 "queue_depth": 16, 00:21:47.555 "io_size": 131072, 00:21:47.555 "runtime": 2.004476, 00:21:47.555 "iops": 6499.45422145239, 00:21:47.555 "mibps": 812.4317776815487, 00:21:47.555 "io_failed": 0, 00:21:47.555 "io_timeout": 0, 00:21:47.555 "avg_latency_us": 
2456.0408630362576, 00:21:47.555 "min_latency_us": 1720.32, 00:21:47.555 "max_latency_us": 8817.57090909091 00:21:47.555 } 00:21:47.555 ], 00:21:47.555 "core_count": 1 00:21:47.555 } 00:21:47.555 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:47.555 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:47.555 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:47.555 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:47.555 | .driver_specific 00:21:47.555 | .nvme_error 00:21:47.555 | .status_code 00:21:47.555 | .command_transient_transport_error' 00:21:47.813 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 421 > 0 )) 00:21:47.813 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 96717 00:21:47.813 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96717 ']' 00:21:47.813 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96717 00:21:47.813 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:47.813 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.814 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96717 00:21:47.814 killing process with pid 96717 00:21:47.814 Received shutdown signal, test time was about 2.000000 seconds 00:21:47.814 00:21:47.814 Latency(us) 00:21:47.814 [2024-12-16T14:37:40.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.814 [2024-12-16T14:37:40.014Z] =================================================================================================================== 00:21:47.814 [2024-12-16T14:37:40.014Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:47.814 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:47.814 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:47.814 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96717' 00:21:47.814 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96717 00:21:47.814 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96717 00:21:47.814 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 96545 00:21:47.814 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 96545 ']' 00:21:47.814 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 96545 00:21:47.814 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:47.814 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.814 14:37:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96545 00:21:47.814 killing process with pid 96545 00:21:47.814 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:47.814 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:47.814 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96545' 00:21:47.814 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 96545 00:21:47.814 14:37:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 96545 00:21:48.072 00:21:48.072 real 0m14.341s 00:21:48.072 user 0m27.890s 00:21:48.072 sys 0m4.255s 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:48.072 ************************************ 00:21:48.072 END TEST nvmf_digest_error 00:21:48.072 ************************************ 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:48.072 rmmod nvme_tcp 00:21:48.072 rmmod nvme_fabrics 00:21:48.072 rmmod nvme_keyring 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 96545 ']' 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 96545 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 96545 ']' 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 96545 00:21:48.072 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (96545) - No such process 00:21:48.072 Process with pid 96545 is not found 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 96545 is not found' 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:21:48.072 14:37:40 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:48.072 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:21:48.331 00:21:48.331 real 0m30.545s 00:21:48.331 user 0m57.821s 00:21:48.331 sys 0m8.975s 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.331 ************************************ 00:21:48.331 END TEST nvmf_digest 00:21:48.331 ************************************ 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:48.331 ************************************ 00:21:48.331 START TEST nvmf_host_multipath 00:21:48.331 ************************************ 00:21:48.331 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:48.590 * Looking for test storage... 00:21:48.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:48.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.590 --rc genhtml_branch_coverage=1 00:21:48.590 --rc genhtml_function_coverage=1 00:21:48.590 --rc genhtml_legend=1 00:21:48.590 --rc geninfo_all_blocks=1 00:21:48.590 --rc geninfo_unexecuted_blocks=1 00:21:48.590 00:21:48.590 ' 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:48.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.590 --rc genhtml_branch_coverage=1 00:21:48.590 --rc genhtml_function_coverage=1 00:21:48.590 --rc genhtml_legend=1 00:21:48.590 --rc geninfo_all_blocks=1 00:21:48.590 --rc geninfo_unexecuted_blocks=1 00:21:48.590 00:21:48.590 ' 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:48.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.590 --rc genhtml_branch_coverage=1 00:21:48.590 --rc genhtml_function_coverage=1 00:21:48.590 --rc genhtml_legend=1 00:21:48.590 --rc geninfo_all_blocks=1 00:21:48.590 --rc geninfo_unexecuted_blocks=1 00:21:48.590 00:21:48.590 ' 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:48.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.590 --rc genhtml_branch_coverage=1 00:21:48.590 --rc genhtml_function_coverage=1 00:21:48.590 --rc genhtml_legend=1 00:21:48.590 --rc geninfo_all_blocks=1 00:21:48.590 --rc geninfo_unexecuted_blocks=1 00:21:48.590 00:21:48.590 ' 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.590 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:48.591 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:48.591 Cannot find device "nvmf_init_br" 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:48.591 Cannot find device "nvmf_init_br2" 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:48.591 Cannot find device "nvmf_tgt_br" 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:21:48.591 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:48.850 Cannot find device "nvmf_tgt_br2" 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:48.850 Cannot find device "nvmf_init_br" 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:48.850 Cannot find device "nvmf_init_br2" 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:48.850 Cannot find device "nvmf_tgt_br" 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:48.850 Cannot find device "nvmf_tgt_br2" 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:48.850 Cannot find device "nvmf_br" 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:48.850 Cannot find device "nvmf_init_if" 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:48.850 Cannot find device "nvmf_init_if2" 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:21:48.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:48.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:48.850 14:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:48.850 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:48.850 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:48.850 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:48.850 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:48.850 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:21:48.850 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:49.109 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:49.109 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:21:49.109 00:21:49.109 --- 10.0.0.3 ping statistics --- 00:21:49.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.109 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:49.109 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:49.109 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:21:49.109 00:21:49.109 --- 10.0.0.4 ping statistics --- 00:21:49.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.109 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:49.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:21:49.109 00:21:49.109 --- 10.0.0.1 ping statistics --- 00:21:49.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.109 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:49.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:49.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:21:49.109 00:21:49.109 --- 10.0.0.2 ping statistics --- 00:21:49.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.109 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=97017 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 97017 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 97017 ']' 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.109 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:49.109 [2024-12-16 14:37:41.197268] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:21:49.109 [2024-12-16 14:37:41.197360] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.367 [2024-12-16 14:37:41.343003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:49.367 [2024-12-16 14:37:41.361734] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.367 [2024-12-16 14:37:41.361806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.367 [2024-12-16 14:37:41.361830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.367 [2024-12-16 14:37:41.361837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.367 [2024-12-16 14:37:41.361843] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.367 [2024-12-16 14:37:41.362663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.367 [2024-12-16 14:37:41.362671] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.367 [2024-12-16 14:37:41.390568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:49.367 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.367 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:49.368 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:49.368 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:49.368 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:49.368 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.368 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=97017 00:21:49.368 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:49.625 [2024-12-16 14:37:41.794947] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.625 14:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:50.191 Malloc0 00:21:50.191 14:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:50.448 14:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:50.448 14:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:50.706 [2024-12-16 14:37:42.856903] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:50.706 14:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:50.965 [2024-12-16 14:37:43.084966] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:50.965 14:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=97064 00:21:50.965 14:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:50.965 14:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:50.965 14:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 97064 /var/tmp/bdevperf.sock 00:21:50.965 14:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 97064 ']' 00:21:50.965 14:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.965 14:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.965 14:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.965 14:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.965 14:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:51.223 14:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.223 14:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:51.223 14:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:51.481 14:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:52.047 Nvme0n1 00:21:52.047 14:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:52.304 Nvme0n1 00:21:52.304 14:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:52.305 14:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:53.238 14:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:53.238 14:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:53.495 14:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:53.753 14:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:53.753 14:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97102 00:21:53.753 14:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97017 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:53.753 14:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:00.344 14:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:00.344 14:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:00.344 14:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:00.344 14:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:00.344 Attaching 4 probes... 00:22:00.344 @path[10.0.0.3, 4421]: 15273 00:22:00.344 @path[10.0.0.3, 4421]: 15632 00:22:00.344 @path[10.0.0.3, 4421]: 15616 00:22:00.344 @path[10.0.0.3, 4421]: 15616 00:22:00.344 @path[10.0.0.3, 4421]: 15616 00:22:00.344 14:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:00.344 14:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:00.344 14:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:00.344 14:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:00.344 14:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:00.344 14:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:00.344 14:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97102 00:22:00.344 14:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:00.344 14:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:22:00.344 14:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:00.344 14:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:00.603 14:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:22:00.603 14:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97221 00:22:00.603 14:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97017 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:00.603 14:37:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:07.163 14:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:07.163 14:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:07.163 14:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:07.163 14:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:07.163 Attaching 4 probes... 00:22:07.163 @path[10.0.0.3, 4420]: 20396 00:22:07.163 @path[10.0.0.3, 4420]: 20676 00:22:07.163 @path[10.0.0.3, 4420]: 20627 00:22:07.163 @path[10.0.0.3, 4420]: 20885 00:22:07.163 @path[10.0.0.3, 4420]: 20720 00:22:07.163 14:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:07.163 14:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:07.163 14:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:07.163 14:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:07.163 14:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:07.164 14:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:07.164 14:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97221 00:22:07.164 14:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:07.164 14:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:22:07.164 14:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:07.164 14:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:07.422 14:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:07.422 14:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97017 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:07.422 14:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97328 00:22:07.422 14:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:13.983 14:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:13.983 14:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:13.983 14:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:13.983 14:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:13.983 Attaching 4 probes... 00:22:13.983 @path[10.0.0.3, 4421]: 15593 00:22:13.983 @path[10.0.0.3, 4421]: 20738 00:22:13.983 @path[10.0.0.3, 4421]: 20674 00:22:13.983 @path[10.0.0.3, 4421]: 20705 00:22:13.983 @path[10.0.0.3, 4421]: 20767 00:22:13.983 14:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:13.983 14:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:13.983 14:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:13.983 14:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:13.983 14:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:13.983 14:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:13.983 14:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97328 00:22:13.983 14:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:13.983 14:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:22:13.983 14:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:13.983 14:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:14.241 14:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:22:14.241 14:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97446 00:22:14.241 14:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97017 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:14.241 14:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:20.876 14:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:20.876 14:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:20.876 14:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:20.876 14:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:20.876 Attaching 4 probes... 
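For readability, the confirm_io_on_port flow that the trace repeats above for each ANA combination can be condensed as the following sketch, reconstructed from the traced commands (the backgrounding of bpftrace.sh, the variable names, and expected_port are illustrative assumptions, not the script's literal text); the probe output for the current case continues right after it:

    # attach the nvmf_path.bt probe to the target (pid 97017 in this run) and collect ~6s of I/O samples
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97017 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt \
        > /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt &
    dtrace_pid=$!
    sleep 6
    # port the target reports for the expected ANA state
    active_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
    # port the probe actually saw I/O on (first @path[10.0.0.3, PORT] sample in trace.txt)
    port=$(cut -d ']' -f1 /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt \
        | awk '$1=="@path[10.0.0.3," {print $2}' | sed -n 1p)
    [[ "$port" == "$expected_port" ]]    # port passed to confirm_io_on_port
    [[ "$port" == "$active_port" ]]      # port the target reports for that ANA state
    kill "$dtrace_pid"
    rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt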
00:22:20.876 00:22:20.876 00:22:20.876 00:22:20.876 00:22:20.876 00:22:20.876 14:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:20.876 14:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:20.876 14:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:20.876 14:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:20.876 14:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:20.876 14:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:20.876 14:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97446 00:22:20.876 14:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:20.876 14:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:20.876 14:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:20.876 14:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:21.134 14:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:21.134 14:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97017 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:21.134 14:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97564 00:22:21.134 14:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:27.711 14:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:27.711 14:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:27.711 14:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:27.711 14:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:27.711 Attaching 4 probes... 
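set_ANA_state itself is just two of the listener RPCs shown in the trace, one per portal; a minimal sketch for the non_optimized/optimized combination exercised here (same subsystem and addresses as above):

    # first argument applies to the 10.0.0.3:4420 listener, second to 10.0.0.3:4421
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized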
00:22:27.711 @path[10.0.0.3, 4421]: 20063 00:22:27.711 @path[10.0.0.3, 4421]: 20290 00:22:27.711 @path[10.0.0.3, 4421]: 20469 00:22:27.711 @path[10.0.0.3, 4421]: 20497 00:22:27.711 @path[10.0.0.3, 4421]: 20497 00:22:27.711 14:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:27.711 14:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:27.711 14:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:27.711 14:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:27.711 14:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:27.711 14:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:27.711 14:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97564 00:22:27.711 14:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:27.711 14:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:27.711 14:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:28.646 14:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:28.646 14:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97684 00:22:28.646 14:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97017 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:28.646 14:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:35.208 14:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:35.208 14:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:35.208 14:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:35.208 14:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:35.208 Attaching 4 probes... 
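The path-removal case traced just above drops the 4421 listener outright and then expects the probe to see I/O on the surviving 4420 path; sketched with the same RPC (comments are editorial, not log output):

    # remove the optimized portal while bdevperf keeps issuing I/O to Nvme0n1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    sleep 1
    # confirm_io_on_port non_optimized 4420 -- the @path[10.0.0.3, 4420] counts below are that check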
00:22:35.208 @path[10.0.0.3, 4420]: 19678 00:22:35.208 @path[10.0.0.3, 4420]: 20169 00:22:35.208 @path[10.0.0.3, 4420]: 20164 00:22:35.208 @path[10.0.0.3, 4420]: 20106 00:22:35.208 @path[10.0.0.3, 4420]: 20389 00:22:35.208 14:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:35.208 14:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:35.208 14:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:35.208 14:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:35.208 14:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:35.208 14:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:35.208 14:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97684 00:22:35.208 14:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:35.208 14:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:35.208 [2024-12-16 14:38:27.269196] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:35.208 14:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:35.466 14:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:42.027 14:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:42.027 14:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97865 00:22:42.027 14:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97017 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:42.027 14:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:48.598 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:48.598 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:48.598 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:48.598 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:48.598 Attaching 4 probes... 
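Failback is the mirror image: re-add the 4421 listener, mark it optimized, give the host time to reconnect, and expect I/O to move back to it; a sketch assembled from the traced commands above:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
    sleep 6   # settle time before the final confirm_io_on_port optimized 4421 below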
00:22:48.598 @path[10.0.0.3, 4421]: 19883 00:22:48.598 @path[10.0.0.3, 4421]: 20207 00:22:48.598 @path[10.0.0.3, 4421]: 20213 00:22:48.598 @path[10.0.0.3, 4421]: 20127 00:22:48.598 @path[10.0.0.3, 4421]: 20179 00:22:48.598 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:48.598 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:48.598 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:48.598 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:48.599 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:48.599 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:48.599 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97865 00:22:48.599 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:48.599 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 97064 00:22:48.599 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 97064 ']' 00:22:48.599 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 97064 00:22:48.599 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:22:48.599 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:48.599 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97064 00:22:48.599 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:48.599 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:48.599 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97064' 00:22:48.599 killing process with pid 97064 00:22:48.599 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 97064 00:22:48.599 14:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 97064 00:22:48.599 { 00:22:48.599 "results": [ 00:22:48.599 { 00:22:48.599 "job": "Nvme0n1", 00:22:48.599 "core_mask": "0x4", 00:22:48.599 "workload": "verify", 00:22:48.599 "status": "terminated", 00:22:48.599 "verify_range": { 00:22:48.599 "start": 0, 00:22:48.599 "length": 16384 00:22:48.599 }, 00:22:48.599 "queue_depth": 128, 00:22:48.599 "io_size": 4096, 00:22:48.599 "runtime": 55.497819, 00:22:48.599 "iops": 8334.723928520507, 00:22:48.599 "mibps": 32.55751534578323, 00:22:48.599 "io_failed": 0, 00:22:48.599 "io_timeout": 0, 00:22:48.599 "avg_latency_us": 15327.225432745778, 00:22:48.599 "min_latency_us": 444.9745454545455, 00:22:48.599 "max_latency_us": 7046430.72 00:22:48.599 } 00:22:48.599 ], 00:22:48.599 "core_count": 1 00:22:48.599 } 00:22:48.599 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 97064 00:22:48.599 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:48.599 [2024-12-16 14:37:43.146839] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 
22.11.4 initialization... 00:22:48.599 [2024-12-16 14:37:43.146949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97064 ] 00:22:48.599 [2024-12-16 14:37:43.285592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.599 [2024-12-16 14:37:43.304685] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.599 [2024-12-16 14:37:43.332287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:48.599 Running I/O for 90 seconds... 00:22:48.599 7844.00 IOPS, 30.64 MiB/s [2024-12-16T14:38:40.799Z] 7775.00 IOPS, 30.37 MiB/s [2024-12-16T14:38:40.799Z] 7748.67 IOPS, 30.27 MiB/s [2024-12-16T14:38:40.799Z] 7791.50 IOPS, 30.44 MiB/s [2024-12-16T14:38:40.799Z] 7795.00 IOPS, 30.45 MiB/s [2024-12-16T14:38:40.799Z] 7797.17 IOPS, 30.46 MiB/s [2024-12-16T14:38:40.799Z] 7798.71 IOPS, 30.46 MiB/s [2024-12-16T14:38:40.799Z] 7784.75 IOPS, 30.41 MiB/s [2024-12-16T14:38:40.799Z] [2024-12-16 14:37:52.639789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.599 [2024-12-16 14:37:52.639844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.639907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.599 [2024-12-16 14:37:52.639926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.639947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.599 [2024-12-16 14:37:52.639961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.639980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.599 [2024-12-16 14:37:52.639994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.599 [2024-12-16 14:37:52.640026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.599 [2024-12-16 14:37:52.640058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.599 [2024-12-16 14:37:52.640091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.599 [2024-12-16 14:37:52.640123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.599 [2024-12-16 14:37:52.640155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.599 [2024-12-16 14:37:52.640212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.599 [2024-12-16 14:37:52.640247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.599 [2024-12-16 14:37:52.640297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.599 [2024-12-16 14:37:52.640331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.599 [2024-12-16 14:37:52.640365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.599 [2024-12-16 14:37:52.640398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.599 [2024-12-16 14:37:52.640431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:48.599 [2024-12-16 14:37:52.640494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.599 [2024-12-16 14:37:52.640531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.599 [2024-12-16 14:37:52.640565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.599 [2024-12-16 14:37:52.640599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.599 [2024-12-16 14:37:52.640633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.599 [2024-12-16 14:37:52.640676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.599 [2024-12-16 14:37:52.640727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.599 [2024-12-16 14:37:52.640761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.599 [2024-12-16 14:37:52.640800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:48.599 [2024-12-16 14:37:52.640820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.600 [2024-12-16 14:37:52.640835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.640868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:104 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.600 [2024-12-16 14:37:52.640883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.640902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.600 [2024-12-16 14:37:52.640916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.640935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.600 [2024-12-16 14:37:52.640949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.640968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.600 [2024-12-16 14:37:52.640983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.600 [2024-12-16 14:37:52.641016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.600 [2024-12-16 14:37:52.641048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641208] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 
sqhd:0063 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.641943] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.641980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.600 [2024-12-16 14:37:52.641999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.642020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.600 [2024-12-16 14:37:52.642035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.642062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.600 [2024-12-16 14:37:52.642078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.642098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.600 [2024-12-16 14:37:52.642113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.642133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.600 [2024-12-16 14:37:52.642147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.642168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.600 [2024-12-16 14:37:52.642182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.642201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.600 [2024-12-16 14:37:52.642217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.642236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.600 [2024-12-16 14:37:52.642251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.642270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.600 [2024-12-16 14:37:52.642285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:48.600 [2024-12-16 14:37:52.642304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 
14:37:52.642318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.642338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.642352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.642371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.642386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.642405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.642419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.642438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.642468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.642489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.642511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.642532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.642547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.642566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.642580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.642600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.642614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.642635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.642649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.642669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120232 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.642683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.642703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.642717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.642742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.642757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.642776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.642791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.642810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.642825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.642844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.601 [2024-12-16 14:37:52.642858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.642878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.601 [2024-12-16 14:37:52.642917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.642957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.601 [2024-12-16 14:37:52.642979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.601 [2024-12-16 14:37:52.643016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.601 [2024-12-16 14:37:52.643052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.601 [2024-12-16 14:37:52.643087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.601 [2024-12-16 14:37:52.643123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.601 [2024-12-16 14:37:52.643158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.643194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.643230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.643295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.643329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.643362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.643399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.643432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 
dnr:0 00:22:48.601 [2024-12-16 14:37:52.643458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.643473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.643520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.643555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.643588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.643622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.643656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.643689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.643723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.601 [2024-12-16 14:37:52.643756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:48.601 [2024-12-16 14:37:52.643775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.602 [2024-12-16 14:37:52.643789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.643809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.602 [2024-12-16 14:37:52.643823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.643842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.602 [2024-12-16 14:37:52.643856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.643882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.602 [2024-12-16 14:37:52.643897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.643917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.602 [2024-12-16 14:37:52.643931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.643953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.602 [2024-12-16 14:37:52.643968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.643988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.602 [2024-12-16 14:37:52.644003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.645274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.602 [2024-12-16 14:37:52.645302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.645328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:52.645345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.645366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:52.645380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.645400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 
14:37:52.645414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.645461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:52.645480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.645501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:52.645517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.645538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:52.645553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.645573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:52.645588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.645745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:52.645770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.645795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:52.645811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.645832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:52.645861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.645881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:52.645895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.645914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:52.645929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.645948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120880 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:52.645963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.645985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:52.646000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.646020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:52.646035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:52.646058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:52.646073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:48.602 8011.89 IOPS, 31.30 MiB/s [2024-12-16T14:38:40.802Z] 8249.10 IOPS, 32.22 MiB/s [2024-12-16T14:38:40.802Z] 8437.36 IOPS, 32.96 MiB/s [2024-12-16T14:38:40.802Z] 8593.25 IOPS, 33.57 MiB/s [2024-12-16T14:38:40.802Z] 8729.77 IOPS, 34.10 MiB/s [2024-12-16T14:38:40.802Z] 8851.07 IOPS, 34.57 MiB/s [2024-12-16T14:38:40.802Z] [2024-12-16 14:37:59.203404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:59.203483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:59.203535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:59.203554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:59.203574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:59.203588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:59.203624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:59.203640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:59.203658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:59.203672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:59.203690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4672 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:59.203704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:59.203723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:59.203736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:59.203755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:59.203768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:59.203962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:59.203985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:59.204009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:59.204024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:59.204043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:59.204057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:59.204076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.602 [2024-12-16 14:37:59.204089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:59.204108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.602 [2024-12-16 14:37:59.204121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:59.204141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.602 [2024-12-16 14:37:59.204154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:59.204173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.602 [2024-12-16 14:37:59.204187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:48.602 [2024-12-16 14:37:59.204230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.204246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.204279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.204314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.204347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.204380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.204412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.204458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.204495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.204528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.204560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204579] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.204593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.204625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.204665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.603 [2024-12-16 14:37:59.204700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.603 [2024-12-16 14:37:59.204733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.603 [2024-12-16 14:37:59.204766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.603 [2024-12-16 14:37:59.204800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.603 [2024-12-16 14:37:59.204838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.603 [2024-12-16 14:37:59.204872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.603 [2024-12-16 14:37:59.204904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 
00:22:48.603 [2024-12-16 14:37:59.204923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.603 [2024-12-16 14:37:59.204937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.603 [2024-12-16 14:37:59.204969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.204988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.603 [2024-12-16 14:37:59.205002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.205021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.603 [2024-12-16 14:37:59.205034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.205053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.603 [2024-12-16 14:37:59.205073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.205094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.603 [2024-12-16 14:37:59.205108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.205127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.603 [2024-12-16 14:37:59.205140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.205159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.205173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.205192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.205205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.205224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.205238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.205258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.205272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.205291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.205305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.205324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.603 [2024-12-16 14:37:59.205339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:48.603 [2024-12-16 14:37:59.205359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.205373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.205392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.205405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.205442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.604 [2024-12-16 14:37:59.205468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.205489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.604 [2024-12-16 14:37:59.205504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.205535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.604 [2024-12-16 14:37:59.205550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.205570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.604 [2024-12-16 14:37:59.205585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.205604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.604 [2024-12-16 14:37:59.205618] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.205638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.604 [2024-12-16 14:37:59.205652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.205675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.604 [2024-12-16 14:37:59.205690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.205710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.604 [2024-12-16 14:37:59.205724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.205744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.604 [2024-12-16 14:37:59.205758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.205778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.604 [2024-12-16 14:37:59.205792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.205812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.604 [2024-12-16 14:37:59.205826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.205846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.604 [2024-12-16 14:37:59.205861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.205880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.604 [2024-12-16 14:37:59.205894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.205914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.604 [2024-12-16 14:37:59.205929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.205955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.205971] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.205990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 
nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.604 [2024-12-16 14:37:59.206798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:48.604 [2024-12-16 14:37:59.206822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.605 [2024-12-16 14:37:59.206855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.206877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.605 [2024-12-16 14:37:59.206891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.206960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.605 [2024-12-16 14:37:59.206976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.206998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.605 [2024-12-16 14:37:59.207013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.605 [2024-12-16 14:37:59.207051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.605 [2024-12-16 14:37:59.207090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.605 [2024-12-16 14:37:59.207128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.605 [2024-12-16 14:37:59.207165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.207201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.207253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.207316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.207350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.207383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.207426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.207460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.207493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:22:48.605 [2024-12-16 14:37:59.207525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.207557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.207608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.207644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.207679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.207715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.207754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.207791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.207826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.605 [2024-12-16 14:37:59.207862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.605 [2024-12-16 14:37:59.207905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.605 [2024-12-16 14:37:59.207970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.207990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.605 [2024-12-16 14:37:59.208004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.208024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.605 [2024-12-16 14:37:59.208038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.208058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.605 [2024-12-16 14:37:59.208072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.208091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.605 [2024-12-16 14:37:59.208106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.208125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.605 [2024-12-16 14:37:59.208139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.208159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.208173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.208193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.208207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.208226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.208240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.208260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.208274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.208294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.208308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.208330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.208350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.208371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.208386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:48.605 [2024-12-16 14:37:59.208406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.605 [2024-12-16 14:37:59.208420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:48.605 8771.93 IOPS, 34.27 MiB/s [2024-12-16T14:38:40.805Z] 8379.00 IOPS, 32.73 MiB/s [2024-12-16T14:38:40.805Z] 8490.12 IOPS, 33.16 MiB/s [2024-12-16T14:38:40.805Z] 8590.67 IOPS, 33.56 MiB/s [2024-12-16T14:38:40.806Z] 8684.63 IOPS, 33.92 MiB/s [2024-12-16T14:38:40.806Z] 8772.40 IOPS, 34.27 MiB/s [2024-12-16T14:38:40.806Z] 8851.05 IOPS, 34.57 MiB/s [2024-12-16T14:38:40.806Z] [2024-12-16 14:38:06.294216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:82 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.294979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.294994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.295029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.295063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.295105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.295142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.295180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295201] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.295216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.295266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.295329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.295361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.606 [2024-12-16 14:38:06.295394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.606 [2024-12-16 14:38:06.295427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.606 [2024-12-16 14:38:06.295459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.606 [2024-12-16 14:38:06.295507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.606 [2024-12-16 14:38:06.295553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.606 [2024-12-16 14:38:06.295594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 
sqhd:0051 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.606 [2024-12-16 14:38:06.295629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.606 [2024-12-16 14:38:06.295663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.295696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.295729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.606 [2024-12-16 14:38:06.295762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:48.606 [2024-12-16 14:38:06.295782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.295796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.295820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.295835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.295869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.295883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.295902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.295915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.295933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.295947] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.295983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.295998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.607 [2024-12-16 14:38:06.296139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.607 [2024-12-16 14:38:06.296172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.607 [2024-12-16 14:38:06.296205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.607 [2024-12-16 14:38:06.296238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.607 [2024-12-16 14:38:06.296272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.607 
[2024-12-16 14:38:06.296304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.607 [2024-12-16 14:38:06.296337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.607 [2024-12-16 14:38:06.296371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:113400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-12-16 14:38:06.296977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.296997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.607 [2024-12-16 14:38:06.297010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.297030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.607 [2024-12-16 14:38:06.297044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:48.607 [2024-12-16 14:38:06.297064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.297077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.297111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.297144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.297178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.297211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.297244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.297277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.297310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.297350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.297384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.297417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.297478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.297530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.297566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.297625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.297662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.297697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.297732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.297767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.297808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.297866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.297916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.297950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.297969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.297983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.298002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.298015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.298035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.298049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.298072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.298086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.298105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.298119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.298138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.298152] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.298171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.298185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.298204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.298218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.298237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.298250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.298269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.298292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.298313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.298327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.298346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.298360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.298381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.298396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.298415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.298429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.299141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-12-16 14:38:06.299168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.299201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113600 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.299218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.299259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-12-16 14:38:06.299275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.608 [2024-12-16 14:38:06.299314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:06.299328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:06.299354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:06.299368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:06.299396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:06.299411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:06.299436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:06.299450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:06.299488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:06.299506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:06.299559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:06.299579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:06.299606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:06.299620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:06.299645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:06.299660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:06.299685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:2 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:06.299699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:06.299724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:06.299739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:06.299764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:06.299778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:06.299806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:06.299821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:06.299846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:06.299860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:06.299885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:06.299900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:48.609 8832.36 IOPS, 34.50 MiB/s [2024-12-16T14:38:40.809Z] 8448.35 IOPS, 33.00 MiB/s [2024-12-16T14:38:40.809Z] 8096.33 IOPS, 31.63 MiB/s [2024-12-16T14:38:40.809Z] 7772.48 IOPS, 30.36 MiB/s [2024-12-16T14:38:40.809Z] 7473.54 IOPS, 29.19 MiB/s [2024-12-16T14:38:40.809Z] 7196.74 IOPS, 28.11 MiB/s [2024-12-16T14:38:40.809Z] 6939.71 IOPS, 27.11 MiB/s [2024-12-16T14:38:40.809Z] 6749.45 IOPS, 26.37 MiB/s [2024-12-16T14:38:40.809Z] 6859.67 IOPS, 26.80 MiB/s [2024-12-16T14:38:40.809Z] 6967.16 IOPS, 27.22 MiB/s [2024-12-16T14:38:40.809Z] 7069.44 IOPS, 27.61 MiB/s [2024-12-16T14:38:40.809Z] 7164.79 IOPS, 27.99 MiB/s [2024-12-16T14:38:40.809Z] 7255.00 IOPS, 28.34 MiB/s [2024-12-16T14:38:40.809Z] 7336.17 IOPS, 28.66 MiB/s [2024-12-16T14:38:40.809Z] [2024-12-16 14:38:19.664099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-12-16 14:38:19.664148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-12-16 14:38:19.664225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664242] 
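The interleaved "IOPS, MiB/s" pairs above are periodic throughput samples emitted between the command/completion dumps. Their ratio is consistent with 4 KiB I/Os (each command in this log reads or writes len:8 logical blocks, i.e. 8 x 512 B): for example 8771.93 IOPS x 4096 B is about 34.27 MiB/s, matching the printed value. The conversion is sketched below under that 4 KiB assumption; the program and its names are illustrative and not part of the test.

/* Illustrative only: relates the "IOPS, MiB/s" samples printed in this log.
 * Assumes 4 KiB per I/O (len:8 blocks of 512 B), which matches the ratios
 * above, e.g. 8771.93 IOPS -> 8771.93 * 4096 / 2^20 = 34.27 MiB/s. */
#include <stdio.h>

int main(void)
{
    const double iops = 8771.93;           /* sample value taken from the log */
    const double io_size_bytes = 4096.0;   /* assumed 4 KiB per I/O */
    double mib_per_s = iops * io_size_bytes / (1024.0 * 1024.0);
    printf("%.2f IOPS -> %.2f MiB/s\n", iops, mib_per_s);
    return 0;
}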
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-12-16 14:38:19.664255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-12-16 14:38:19.664281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-12-16 14:38:19.664306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-12-16 14:38:19.664332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-12-16 14:38:19.664357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-12-16 14:38:19.664382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:106096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:19.664409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:106104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:19.664434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:19.664474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:19.664500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:106128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:19.664526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:106136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:19.664551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:106144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:19.664585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:19.664612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:106160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:19.664638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:106168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:19.664667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:106176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:19.664693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:19.664718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:106192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:19.664744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:106200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:19.664769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:46 nsid:1 lba:106208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:19.664795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:106216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-12-16 14:38:19.664820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-12-16 14:38:19.664834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-12-16 14:38:19.664846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-12-16 14:38:19.664860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-12-16 14:38:19.664872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-12-16 14:38:19.664901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-12-16 14:38:19.664914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-12-16 14:38:19.664933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-12-16 14:38:19.664947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-12-16 14:38:19.664960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-12-16 14:38:19.664973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-12-16 14:38:19.665003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-12-16 14:38:19.665016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-12-16 14:38:19.665030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-12-16 14:38:19.665043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-12-16 14:38:19.665057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-12-16 14:38:19.665070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-12-16 14:38:19.665084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 
lba:106224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.610 [2024-12-16 14:38:19.665098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated notices from 2024-12-16 14:38:19.665112 through 14:38:19.668241 condensed: nvme_io_qpair_print_command and spdk_nvme_print_completion report every remaining queued command on qid:1 (WRITE lba 106232-106600, SGL DATA BLOCK OFFSET; READ lba 105712-106048, SGL TRANSPORT DATA BLOCK) completing as ABORTED - SQ DELETION (00/08) while the submission queue is deleted for the controller reset ...]
00:22:48.612 [2024-12-16 14:38:19.668254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.612 [2024-12-16 14:38:19.668267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.612 [2024-12-16 14:38:19.668281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.612 [2024-12-16 14:38:19.668295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.612 [2024-12-16 14:38:19.668309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.612 [2024-12-16 14:38:19.668321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.612 [2024-12-16 14:38:19.668335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.612 [2024-12-16 14:38:19.668348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.612 [2024-12-16 14:38:19.668400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.612 [2024-12-16 14:38:19.668422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.612 [2024-12-16 14:38:19.668433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106088 len:8 PRP1 0x0 PRP2 0x0 00:22:48.612 [2024-12-16 14:38:19.668469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.612 [2024-12-16 14:38:19.668581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.612 [2024-12-16 14:38:19.668605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.612 [2024-12-16 14:38:19.668619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.612 [2024-12-16 14:38:19.668631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.612 [2024-12-16 14:38:19.668644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.612 [2024-12-16 14:38:19.668655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.612 [2024-12-16 14:38:19.668670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.612 [2024-12-16 14:38:19.668683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.612 [2024-12-16 14:38:19.668694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e340 is same with the state(6) to be set 00:22:48.612 [2024-12-16 14:38:19.669738] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:48.612 [2024-12-16 14:38:19.669774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2e340 (9): Bad file descriptor 00:22:48.612 [2024-12-16 14:38:19.670102] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.612 [2024-12-16 14:38:19.670132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2e340 with addr=10.0.0.3, port=4421 00:22:48.612 [2024-12-16 14:38:19.670148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e340 is same with the state(6) to be set 00:22:48.612 [2024-12-16 14:38:19.670298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2e340 (9): Bad file descriptor 00:22:48.612 [2024-12-16 14:38:19.670352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:48.612 [2024-12-16 14:38:19.670373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:48.612 [2024-12-16 14:38:19.670386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:48.612 [2024-12-16 14:38:19.670398] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:22:48.612 [2024-12-16 14:38:19.670411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:48.612 7408.42 IOPS, 28.94 MiB/s [2024-12-16T14:38:40.812Z] 7470.24 IOPS, 29.18 MiB/s [2024-12-16T14:38:40.812Z] 7541.03 IOPS, 29.46 MiB/s [2024-12-16T14:38:40.812Z] 7606.13 IOPS, 29.71 MiB/s [2024-12-16T14:38:40.812Z] 7668.38 IOPS, 29.95 MiB/s [2024-12-16T14:38:40.812Z] 7726.80 IOPS, 30.18 MiB/s [2024-12-16T14:38:40.812Z] 7783.60 IOPS, 30.40 MiB/s [2024-12-16T14:38:40.812Z] 7832.35 IOPS, 30.60 MiB/s [2024-12-16T14:38:40.812Z] 7883.25 IOPS, 30.79 MiB/s [2024-12-16T14:38:40.812Z] 7932.42 IOPS, 30.99 MiB/s [2024-12-16T14:38:40.812Z] [2024-12-16 14:38:29.719882] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
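With both paths configured, the trace above shows the path at 10.0.0.3:4421 refusing connections (connect() errno 111) while its listener is down, the reconnect poll failing into an error state, and then, roughly ten seconds later, bdev_nvme reporting "Resetting controller successful" once the path comes back. A quick way to confirm the recovered controller from the host side is to query the bdevperf RPC socket; the socket path below is the one declared for these host tests and the output fields are only sketched, so treat this as an illustrative check rather than part of multipath.sh:

# hedged sketch: list NVMe-oF controllers and their paths after the reset completes
# (assumes the /var/tmp/bdevperf.sock RPC socket used by these host tests)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
# each entry reports the controller name and, per path, its state and transport address (traddr/trsvcid)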
00:22:48.612 7977.20 IOPS, 31.16 MiB/s [2024-12-16T14:38:40.812Z] 8023.74 IOPS, 31.34 MiB/s [2024-12-16T14:38:40.812Z] 8069.15 IOPS, 31.52 MiB/s [2024-12-16T14:38:40.812Z] 8115.37 IOPS, 31.70 MiB/s [2024-12-16T14:38:40.812Z] 8148.30 IOPS, 31.83 MiB/s [2024-12-16T14:38:40.812Z] 8186.65 IOPS, 31.98 MiB/s [2024-12-16T14:38:40.812Z] 8222.81 IOPS, 32.12 MiB/s [2024-12-16T14:38:40.812Z] 8256.89 IOPS, 32.25 MiB/s [2024-12-16T14:38:40.812Z] 8291.39 IOPS, 32.39 MiB/s [2024-12-16T14:38:40.812Z] 8326.67 IOPS, 32.53 MiB/s [2024-12-16T14:38:40.813Z] Received shutdown signal, test time was about 55.498527 seconds 00:22:48.613 00:22:48.613 Latency(us) 00:22:48.613 [2024-12-16T14:38:40.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.613 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:48.613 Verification LBA range: start 0x0 length 0x4000 00:22:48.613 Nvme0n1 : 55.50 8334.72 32.56 0.00 0.00 15327.23 444.97 7046430.72 00:22:48.613 [2024-12-16T14:38:40.813Z] =================================================================================================================== 00:22:48.613 [2024-12-16T14:38:40.813Z] Total : 8334.72 32.56 0.00 0.00 15327.23 444.97 7046430.72 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:48.613 rmmod nvme_tcp 00:22:48.613 rmmod nvme_fabrics 00:22:48.613 rmmod nvme_keyring 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 97017 ']' 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 97017 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 97017 ']' 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 97017 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97017 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:48.613 killing process with pid 97017 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97017' 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 97017 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 97017 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.613 14:38:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.613 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.872 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:22:48.872 00:22:48.872 real 1m0.286s 00:22:48.872 user 2m47.889s 00:22:48.872 sys 0m17.419s 00:22:48.873 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.873 14:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:48.873 ************************************ 00:22:48.873 END TEST nvmf_host_multipath 00:22:48.873 ************************************ 00:22:48.873 14:38:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:48.873 14:38:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:48.873 14:38:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:48.873 14:38:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.873 ************************************ 00:22:48.873 START TEST nvmf_timeout 00:22:48.873 ************************************ 00:22:48.873 14:38:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:48.873 * Looking for test storage... 00:22:48.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:48.873 14:38:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:48.873 14:38:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:22:48.873 14:38:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:48.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.873 --rc genhtml_branch_coverage=1 00:22:48.873 --rc genhtml_function_coverage=1 00:22:48.873 --rc genhtml_legend=1 00:22:48.873 --rc geninfo_all_blocks=1 00:22:48.873 --rc geninfo_unexecuted_blocks=1 00:22:48.873 00:22:48.873 ' 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:48.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.873 --rc genhtml_branch_coverage=1 00:22:48.873 --rc genhtml_function_coverage=1 00:22:48.873 --rc genhtml_legend=1 00:22:48.873 --rc geninfo_all_blocks=1 00:22:48.873 --rc geninfo_unexecuted_blocks=1 00:22:48.873 00:22:48.873 ' 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:48.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.873 --rc genhtml_branch_coverage=1 00:22:48.873 --rc genhtml_function_coverage=1 00:22:48.873 --rc genhtml_legend=1 00:22:48.873 --rc geninfo_all_blocks=1 00:22:48.873 --rc geninfo_unexecuted_blocks=1 00:22:48.873 00:22:48.873 ' 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:48.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.873 --rc genhtml_branch_coverage=1 00:22:48.873 --rc genhtml_function_coverage=1 00:22:48.873 --rc genhtml_legend=1 00:22:48.873 --rc geninfo_all_blocks=1 00:22:48.873 --rc geninfo_unexecuted_blocks=1 00:22:48.873 00:22:48.873 ' 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.873 
14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:48.873 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:48.873 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:48.874 14:38:41 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:48.874 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:49.132 Cannot find device "nvmf_init_br" 00:22:49.132 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:49.132 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:49.132 Cannot find device "nvmf_init_br2" 00:22:49.132 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:49.132 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:22:49.132 Cannot find device "nvmf_tgt_br" 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:49.133 Cannot find device "nvmf_tgt_br2" 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:49.133 Cannot find device "nvmf_init_br" 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:49.133 Cannot find device "nvmf_init_br2" 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:49.133 Cannot find device "nvmf_tgt_br" 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:49.133 Cannot find device "nvmf_tgt_br2" 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:49.133 Cannot find device "nvmf_br" 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:49.133 Cannot find device "nvmf_init_if" 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:49.133 Cannot find device "nvmf_init_if2" 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:49.133 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:49.133 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:49.133 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
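For reference, the nvmf_veth_init sequence traced above (nvmf/common.sh@145-219) condenses to the sketch below; interface names, addresses and iptables rules are exactly those logged, the initial teardown probes (the "Cannot find device" lines) are omitted, and the loops are only a shorthand introduced here:

    # Condensed from the nvmf_veth_init trace above; not the harness script itself.
    # Initiator side stays in the default namespace, target side lives in
    # nvmf_tgt_ns_spdk, and a bridge joins the host-side ends of the veth pairs.
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # first initiator IP
    ip addr add 10.0.0.2/24 dev nvmf_init_if2                               # second initiator IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 # second target IP

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Admit NVMe/TCP (port 4420) on the initiator interfaces and allow
    # bridge-local forwarding, matching the iptables rules logged above.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT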
00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:49.392 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:49.392 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:22:49.392 00:22:49.392 --- 10.0.0.3 ping statistics --- 00:22:49.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.392 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:49.392 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:49.392 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:22:49.392 00:22:49.392 --- 10.0.0.4 ping statistics --- 00:22:49.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.392 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:49.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:49.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:22:49.392 00:22:49.392 --- 10.0.0.1 ping statistics --- 00:22:49.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.392 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:49.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:22:49.392 00:22:49.392 --- 10.0.0.2 ping statistics --- 00:22:49.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.392 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=98219 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 98219 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:49.392 14:38:41 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98219 ']' 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.392 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:49.392 [2024-12-16 14:38:41.498854] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:49.392 [2024-12-16 14:38:41.498955] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.651 [2024-12-16 14:38:41.643130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:49.651 [2024-12-16 14:38:41.661431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.651 [2024-12-16 14:38:41.661514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.651 [2024-12-16 14:38:41.661524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.651 [2024-12-16 14:38:41.661530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.651 [2024-12-16 14:38:41.661536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:49.651 [2024-12-16 14:38:41.662272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.651 [2024-12-16 14:38:41.662296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.651 [2024-12-16 14:38:41.690149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:49.651 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.651 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:49.651 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:49.651 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:49.651 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:49.651 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.651 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:49.651 14:38:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:49.909 [2024-12-16 14:38:42.082738] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.909 14:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:50.168 Malloc0 00:22:50.168 14:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:50.426 14:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:50.684 14:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:50.943 [2024-12-16 14:38:42.983783] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:50.943 14:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=98261 00:22:50.943 14:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:50.943 14:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 98261 /var/tmp/bdevperf.sock 00:22:50.943 14:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98261 ']' 00:22:50.943 14:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.943 14:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.943 14:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
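With connectivity verified by the four pings, the target side of the trace (nvmfappstart and host/timeout.sh@25-29) amounts to launching nvmf_tgt inside the namespace and issuing five RPCs over /var/tmp/spdk.sock. A condensed sketch follows; paths, NQN and flags are exactly as logged, and $RPC is only a shorthand introduced here:

    # Shorthand for the RPC client used throughout the trace.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Launch the NVMe-oF target inside the namespace; -e 0xFFFF is the tracepoint
    # group mask and -m 0x3 pins the two reactors seen on cores 0 and 1 above.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    # (the harness then waits for /var/tmp/spdk.sock before configuring)

    $RPC nvmf_create_transport -t tcp -o -u 8192    # TCP transport, flags as logged
    $RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MB malloc bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420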
00:22:50.943 14:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.943 14:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:50.943 [2024-12-16 14:38:43.057750] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:50.943 [2024-12-16 14:38:43.057849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98261 ] 00:22:51.201 [2024-12-16 14:38:43.204631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.201 [2024-12-16 14:38:43.223973] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.201 [2024-12-16 14:38:43.252223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:51.201 14:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.201 14:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:51.201 14:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:51.460 14:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:51.718 NVMe0n1 00:22:51.718 14:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=98277 00:22:51.718 14:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:51.718 14:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:51.977 Running I/O for 10 seconds... 
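On the initiator side (host/timeout.sh@31-53), bdevperf is started idle, the remote controller is attached with a deliberately short controller-loss timeout, and the verify workload is kicked off. Condensed from the trace above; $RPC is the same shorthand as before and the backgrounding stands in for the harness's waitforlisten handling:

    # bdevperf on core 2 (-m 0x4), started idle (-z) so it can be configured over
    # /var/tmp/bdevperf.sock first; queue depth 128, 4 KiB verify workload, 10 s run.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &

    # Attach the target with a 5 s controller-loss timeout and a 2 s reconnect
    # delay, the knobs this timeout test exercises; other flags exactly as logged.
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2    # exposes bdev NVMe0n1

    # Start the workload; "Running I/O for 10 seconds..." above is its output.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &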
00:22:52.912 14:38:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:53.172 7957.00 IOPS, 31.08 MiB/s [2024-12-16T14:38:45.372Z] [2024-12-16 14:38:45.137658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.172 [2024-12-16 14:38:45.137714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.137726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.172 [2024-12-16 14:38:45.137735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.137744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.172 [2024-12-16 14:38:45.137752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.137761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.172 [2024-12-16 14:38:45.137768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.137777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfe250 is same with the state(6) to be set 00:22:53.172 [2024-12-16 14:38:45.137999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.172 [2024-12-16 14:38:45.138016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 
14:38:45.138302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:6 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.172 [2024-12-16 14:38:45.138858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.172 [2024-12-16 14:38:45.138866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.138876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.138885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.138894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.138903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.138912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74368 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.138922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.138932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.138950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.138993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 
14:38:45.139159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:53.173 [2024-12-16 14:38:45.139779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139970] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.139989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.139997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.140016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.140034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.140052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.140070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.140088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.140107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.140125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.140146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140156] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.140164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.140183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.140201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.140220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.173 [2024-12-16 14:38:45.140238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.173 [2024-12-16 14:38:45.140257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.173 [2024-12-16 14:38:45.140275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.173 [2024-12-16 14:38:45.140293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.173 [2024-12-16 14:38:45.140312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.173 [2024-12-16 14:38:45.140330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.173 [2024-12-16 14:38:45.140348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.173 [2024-12-16 14:38:45.140367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.173 [2024-12-16 14:38:45.140385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.173 [2024-12-16 14:38:45.140403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.173 [2024-12-16 14:38:45.140422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.173 [2024-12-16 14:38:45.140457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.174 [2024-12-16 14:38:45.140472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.174 [2024-12-16 14:38:45.140482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.174 [2024-12-16 14:38:45.140492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.174 [2024-12-16 14:38:45.140502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.174 [2024-12-16 14:38:45.140510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.174 [2024-12-16 14:38:45.140521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.174 [2024-12-16 14:38:45.140529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.174 [2024-12-16 14:38:45.140539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.174 [2024-12-16 14:38:45.140548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.174 [2024-12-16 14:38:45.140558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:53.174 [2024-12-16 14:38:45.140567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.174 [2024-12-16 14:38:45.140576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1f840 is same with the state(6) to be set 00:22:53.174 [2024-12-16 14:38:45.140588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:53.174 [2024-12-16 14:38:45.140595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:53.174 [2024-12-16 14:38:45.140603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74904 len:8 PRP1 0x0 PRP2 0x0 00:22:53.174 [2024-12-16 14:38:45.140611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.174 [2024-12-16 14:38:45.140875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:53.174 [2024-12-16 14:38:45.140899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfe250 (9): Bad file descriptor 00:22:53.174 [2024-12-16 14:38:45.140982] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.174 [2024-12-16 14:38:45.141002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cfe250 with addr=10.0.0.3, port=4420 00:22:53.174 [2024-12-16 14:38:45.141013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfe250 is same with the state(6) to be set 00:22:53.174 [2024-12-16 14:38:45.141028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfe250 (9): Bad file descriptor 00:22:53.174 [2024-12-16 14:38:45.141043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:53.174 [2024-12-16 14:38:45.141051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:53.174 [2024-12-16 14:38:45.141061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:53.174 [2024-12-16 14:38:45.141070] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
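
[editor's note] The reset-failure loop above is followed in the log by host/timeout.sh's get_controller and get_bdev helpers (the lines tagged host/timeout.sh@41 and host/timeout.sh@37 below). A minimal standalone sketch of those two checks is given here for readability; the rpc.py path, socket name and jq filter are copied from this log and are assumptions for any other environment.

    #!/usr/bin/env bash
    # Sketch of the controller/bdev presence checks seen in the log below
    # (host/timeout.sh get_controller / get_bdev). Paths and names are
    # taken from this log; they will differ on another machine.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    # Name of the attached controller, e.g. "NVMe0" while it is connected,
    # or empty once the controller has been deleted after the loss timeout.
    ctrl=$("$RPC" -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name')

    # Name of the bdev exposed by that controller, e.g. "NVMe0n1".
    bdev=$("$RPC" -s "$SOCK" bdev_get_bdevs | jq -r '.[].name')

    echo "controller='$ctrl' bdev='$bdev'"
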
00:22:53.174 [2024-12-16 14:38:45.141081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:53.174 14:38:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:55.044 4618.00 IOPS, 18.04 MiB/s [2024-12-16T14:38:47.244Z] 3078.67 IOPS, 12.03 MiB/s [2024-12-16T14:38:47.244Z] [2024-12-16 14:38:47.141285] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.044 [2024-12-16 14:38:47.141346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cfe250 with addr=10.0.0.3, port=4420 00:22:55.044 [2024-12-16 14:38:47.141359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfe250 is same with the state(6) to be set 00:22:55.044 [2024-12-16 14:38:47.141380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfe250 (9): Bad file descriptor 00:22:55.044 [2024-12-16 14:38:47.141398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:55.044 [2024-12-16 14:38:47.141407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:55.044 [2024-12-16 14:38:47.141417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:55.044 [2024-12-16 14:38:47.141426] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:55.044 [2024-12-16 14:38:47.141436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:55.044 14:38:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:55.044 14:38:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:55.044 14:38:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:55.302 14:38:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:55.302 14:38:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:55.302 14:38:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:55.302 14:38:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:55.560 14:38:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:55.560 14:38:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:57.192 2309.00 IOPS, 9.02 MiB/s [2024-12-16T14:38:49.392Z] 1847.20 IOPS, 7.22 MiB/s [2024-12-16T14:38:49.392Z] [2024-12-16 14:38:49.141658] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:57.192 [2024-12-16 14:38:49.141716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cfe250 with addr=10.0.0.3, port=4420 00:22:57.192 [2024-12-16 14:38:49.141731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfe250 is same with the state(6) to be set 00:22:57.192 [2024-12-16 14:38:49.141753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfe250 (9): Bad file descriptor 00:22:57.192 [2024-12-16 14:38:49.141770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:57.192 [2024-12-16 14:38:49.141779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:57.192 [2024-12-16 14:38:49.141789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:57.192 [2024-12-16 14:38:49.141799] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:57.192 [2024-12-16 14:38:49.141809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:59.060 1539.33 IOPS, 6.01 MiB/s [2024-12-16T14:38:51.260Z] 1319.43 IOPS, 5.15 MiB/s [2024-12-16T14:38:51.260Z] [2024-12-16 14:38:51.141952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:59.060 [2024-12-16 14:38:51.142004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:59.060 [2024-12-16 14:38:51.142014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:59.060 [2024-12-16 14:38:51.142023] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:22:59.060 [2024-12-16 14:38:51.142033] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:59.994 1154.50 IOPS, 4.51 MiB/s 00:22:59.994 Latency(us) 00:22:59.994 [2024-12-16T14:38:52.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.994 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:59.994 Verification LBA range: start 0x0 length 0x4000 00:22:59.994 NVMe0n1 : 8.16 1132.48 4.42 15.69 0.00 111346.31 3425.75 7015926.69 00:22:59.994 [2024-12-16T14:38:52.194Z] =================================================================================================================== 00:22:59.994 [2024-12-16T14:38:52.194Z] Total : 1132.48 4.42 15.69 0.00 111346.31 3425.75 7015926.69 00:22:59.994 { 00:22:59.994 "results": [ 00:22:59.994 { 00:22:59.994 "job": "NVMe0n1", 00:22:59.994 "core_mask": "0x4", 00:22:59.994 "workload": "verify", 00:22:59.994 "status": "finished", 00:22:59.994 "verify_range": { 00:22:59.994 "start": 0, 00:22:59.994 "length": 16384 00:22:59.994 }, 00:22:59.994 "queue_depth": 128, 00:22:59.994 "io_size": 4096, 00:22:59.994 "runtime": 8.155538, 00:22:59.994 "iops": 1132.4820018004943, 00:22:59.994 "mibps": 4.423757819533181, 00:22:59.994 "io_failed": 128, 00:22:59.994 "io_timeout": 0, 00:22:59.994 "avg_latency_us": 111346.31074521378, 00:22:59.994 "min_latency_us": 3425.7454545454543, 00:22:59.994 "max_latency_us": 7015926.69090909 00:22:59.994 } 00:22:59.994 ], 00:22:59.994 "core_count": 1 00:22:59.994 } 00:23:00.561 14:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:23:00.561 14:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:00.561 14:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:00.819 14:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:23:00.819 14:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:23:00.819 14:38:52 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:00.819 14:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:01.077 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:23:01.077 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 98277 00:23:01.077 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 98261 00:23:01.077 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98261 ']' 00:23:01.077 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98261 00:23:01.077 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:01.077 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.077 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98261 00:23:01.335 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:01.335 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:01.335 killing process with pid 98261 00:23:01.335 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98261' 00:23:01.335 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98261 00:23:01.335 Received shutdown signal, test time was about 9.297969 seconds 00:23:01.335 00:23:01.335 Latency(us) 00:23:01.335 [2024-12-16T14:38:53.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.335 [2024-12-16T14:38:53.535Z] =================================================================================================================== 00:23:01.335 [2024-12-16T14:38:53.535Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:01.335 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98261 00:23:01.335 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:01.594 [2024-12-16 14:38:53.587395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:01.594 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=98394 00:23:01.594 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:01.594 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 98394 /var/tmp/bdevperf.sock 00:23:01.594 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98394 ']' 00:23:01.594 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.594 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:01.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:01.594 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.594 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:01.594 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:01.594 [2024-12-16 14:38:53.648770] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:01.594 [2024-12-16 14:38:53.648873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98394 ] 00:23:01.594 [2024-12-16 14:38:53.785789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.852 [2024-12-16 14:38:53.805870] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.852 [2024-12-16 14:38:53.835955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:01.852 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.852 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:01.852 14:38:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:02.110 14:38:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:23:02.368 NVMe0n1 00:23:02.368 14:38:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=98410 00:23:02.368 14:38:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:02.368 14:38:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:23:02.627 Running I/O for 10 seconds... 
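
[editor's note] For reference, the bdevperf-side setup that produces the run above is spread across the preceding log lines (bdev_nvme_set_options, bdev_nvme_attach_controller, bdevperf.py perform_tests). Collected into one hedged sketch below; every flag, address and path is copied from this log, so treat the values as examples for this environment only.

    #!/usr/bin/env bash
    # Sketch of the setup recorded above: infinite RPC retries, TCP attach
    # with the reconnect/timeout knobs under test, then the 10 s verify run.
    # All values are copied from this log, not a general recommendation.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    # Retry bdev_nvme RPCs indefinitely while bdevperf starts up.
    "$RPC" -s "$SOCK" bdev_nvme_set_options -r -1

    # Attach the target with a 5 s ctrlr-loss timeout, 2 s fast-io-fail
    # timeout and a 1 s reconnect delay -- the parameters this test exercises.
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 \
        --reconnect-delay-sec 1

    # Kick off the workload defined on the bdevperf command line
    # (-q 128 -o 4096 -w verify -t 10 in this job).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$SOCK" perform_tests
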
00:23:03.564 14:38:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:03.564 7972.00 IOPS, 31.14 MiB/s [2024-12-16T14:38:55.764Z] [2024-12-16 14:38:55.720617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 
14:38:55.720853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.720952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb35d0 is same with the state(6) to be set 00:23:03.564 [2024-12-16 14:38:55.722892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-12-16 14:38:55.722931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.564 [2024-12-16 14:38:55.722961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-12-16 14:38:55.723003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.564 [2024-12-16 14:38:55.723015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-12-16 14:38:55.723023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.564 [2024-12-16 14:38:55.723051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-12-16 14:38:55.723085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.564 [2024-12-16 14:38:55.723096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-12-16 14:38:55.723105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.564 [2024-12-16 14:38:55.723116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-12-16 14:38:55.723125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.564 [2024-12-16 14:38:55.723136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-12-16 14:38:55.723145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.564 [2024-12-16 14:38:55.723156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-12-16 14:38:55.723165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.564 [2024-12-16 14:38:55.723176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-12-16 14:38:55.723185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.564 [2024-12-16 14:38:55.723196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-12-16 14:38:55.723205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.564 [2024-12-16 14:38:55.723216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-12-16 14:38:55.723225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.564 [2024-12-16 14:38:55.723236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.564 [2024-12-16 14:38:55.723245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.564 [2024-12-16 14:38:55.723256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-12-16 14:38:55.723265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-12-16 14:38:55.723285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-12-16 14:38:55.723305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-12-16 14:38:55.723341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-12-16 14:38:55.723362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-12-16 14:38:55.723396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-12-16 14:38:55.723415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-12-16 14:38:55.723434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-12-16 14:38:55.723453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-12-16 14:38:55.723472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-12-16 14:38:55.723491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-12-16 14:38:55.723510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-12-16 14:38:55.723543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-12-16 14:38:55.723562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-12-16 14:38:55.723581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-12-16 14:38:55.723600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.565 [2024-12-16 14:38:55.723619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.565 [2024-12-16 14:38:55.723638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.565 [2024-12-16 14:38:55.723656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.565 [2024-12-16 14:38:55.723675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.565 [2024-12-16 14:38:55.723695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.565 [2024-12-16 14:38:55.723714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 
[2024-12-16 14:38:55.723725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.565 [2024-12-16 14:38:55.723733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.565 [2024-12-16 14:38:55.723752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.565 [2024-12-16 14:38:55.723771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.565 [2024-12-16 14:38:55.723790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.565 [2024-12-16 14:38:55.723809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.565 [2024-12-16 14:38:55.723828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.565 [2024-12-16 14:38:55.723848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.565 [2024-12-16 14:38:55.723867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.565 [2024-12-16 14:38:55.723886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.565 [2024-12-16 14:38:55.723905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723920] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21abac0 is same with the state(6) to be set 00:23:03.565 [2024-12-16 14:38:55.723932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.565 [2024-12-16 14:38:55.723939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.565 [2024-12-16 14:38:55.723946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71840 len:8 PRP1 0x0 PRP2 0x0 00:23:03.565 [2024-12-16 14:38:55.723954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.565 [2024-12-16 14:38:55.723971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.565 [2024-12-16 14:38:55.723978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71968 len:8 PRP1 0x0 PRP2 0x0 00:23:03.565 [2024-12-16 14:38:55.723986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.723995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.565 [2024-12-16 14:38:55.724002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.565 [2024-12-16 14:38:55.724009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71976 len:8 PRP1 0x0 PRP2 0x0 00:23:03.565 [2024-12-16 14:38:55.724018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.724026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.565 [2024-12-16 14:38:55.724033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.565 [2024-12-16 14:38:55.724040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71984 len:8 PRP1 0x0 PRP2 0x0 00:23:03.565 [2024-12-16 14:38:55.724048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.565 [2024-12-16 14:38:55.724057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.565 [2024-12-16 14:38:55.724063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.565 [2024-12-16 14:38:55.724071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71992 len:8 PRP1 0x0 PRP2 0x0 00:23:03.565 [2024-12-16 14:38:55.724079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72000 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72008 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72016 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72024 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72032 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72040 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72048 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:03.566 [2024-12-16 14:38:55.724304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72056 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72064 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72072 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72080 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72088 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72096 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724506] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72104 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72112 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72120 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72128 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72136 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72144 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72152 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72160 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72168 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.566 [2024-12-16 14:38:55.724798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72176 len:8 PRP1 0x0 PRP2 0x0 00:23:03.566 [2024-12-16 14:38:55.724806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.566 [2024-12-16 14:38:55.724815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.566 [2024-12-16 14:38:55.724821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.724828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72184 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.724836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.724845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.724851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.724858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72192 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.724866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.724875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 
14:38:55.724882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.724889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72200 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.724897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.724906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.724912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.724920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72208 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.724930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.724938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.724945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.724952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72216 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.724960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.724969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.724975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.724982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72224 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.724990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.724999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.725017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72232 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.725034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.725048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72240 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.725065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725071] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.725079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72248 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.725096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.725110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72256 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.725126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.725140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72264 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.725157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.725170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72272 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.725190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.725203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72280 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.725220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.725233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72288 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.725250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.725266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72296 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.725283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.725296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72304 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.725314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.725327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72312 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.725344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.725357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72320 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.725373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.725387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72328 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.725404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.725418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72336 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.725462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 
14:38:55.725477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72344 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.725495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.725509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72352 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.725526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.725542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72360 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.567 [2024-12-16 14:38:55.725559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.567 [2024-12-16 14:38:55.725566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.567 [2024-12-16 14:38:55.725573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72368 len:8 PRP1 0x0 PRP2 0x0 00:23:03.567 [2024-12-16 14:38:55.725581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.725590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.725597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.725604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72376 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.736296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.736357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.736370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.736384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72384 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.736396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.736409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.736418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.736444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72392 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.736458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.736471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.736480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.736491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72400 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.736504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.736516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.736535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.736546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72408 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.736567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.736580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.736589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.736599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72416 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.736611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.736623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.736632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.736643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72424 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.736655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.736667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.736676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.736686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72432 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.736698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.736710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.736730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.736740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:72440 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.736752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.736764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.736773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.736784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72448 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.736795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.736807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.736817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.736827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72456 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.736839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.736851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.736860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 14:38:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:23:03.568 [2024-12-16 14:38:55.736870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72464 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.736882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.736894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.736903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.736914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72472 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.736925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.736937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.736946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.736956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72480 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.736968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.736981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.736990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.737001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72488 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.737013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.737025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.737045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.737056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72496 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.737067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.737079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.737100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.737110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72504 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.737121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.737133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.737143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.737153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72512 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.737164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.737176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.737186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.737196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72520 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.737208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.568 [2024-12-16 14:38:55.737220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.568 [2024-12-16 14:38:55.737229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.568 [2024-12-16 14:38:55.737239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72528 len:8 PRP1 0x0 PRP2 0x0 00:23:03.568 [2024-12-16 14:38:55.737251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.569 [2024-12-16 14:38:55.737263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.569 [2024-12-16 14:38:55.737273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.569 [2024-12-16 14:38:55.737283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:72536 len:8 PRP1 0x0 PRP2 0x0 00:23:03.569 [2024-12-16 14:38:55.737294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.569 [2024-12-16 14:38:55.737306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.569 [2024-12-16 14:38:55.737316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.569 [2024-12-16 14:38:55.737326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72544 len:8 PRP1 0x0 PRP2 0x0 00:23:03.569 [2024-12-16 14:38:55.737337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.569 [2024-12-16 14:38:55.737349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.569 [2024-12-16 14:38:55.737359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.569 [2024-12-16 14:38:55.737369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72552 len:8 PRP1 0x0 PRP2 0x0 00:23:03.569 [2024-12-16 14:38:55.737381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.569 [2024-12-16 14:38:55.737393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.569 [2024-12-16 14:38:55.737402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.569 [2024-12-16 14:38:55.737421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72560 len:8 PRP1 0x0 PRP2 0x0 00:23:03.569 [2024-12-16 14:38:55.737456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.569 [2024-12-16 14:38:55.737469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.569 [2024-12-16 14:38:55.737478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.569 [2024-12-16 14:38:55.737488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72568 len:8 PRP1 0x0 PRP2 0x0 00:23:03.569 [2024-12-16 14:38:55.737499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.569 [2024-12-16 14:38:55.737512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.569 [2024-12-16 14:38:55.737522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.569 [2024-12-16 14:38:55.737532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72576 len:8 PRP1 0x0 PRP2 0x0 00:23:03.569 [2024-12-16 14:38:55.737544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.569 [2024-12-16 14:38:55.737556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.569 [2024-12-16 14:38:55.737565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.569 [2024-12-16 14:38:55.737575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72584 len:8 PRP1 0x0 PRP2 0x0 
00:23:03.569 [2024-12-16 14:38:55.737587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.569 [2024-12-16 14:38:55.737599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.569 [2024-12-16 14:38:55.737609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.569 [2024-12-16 14:38:55.737619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72592 len:8 PRP1 0x0 PRP2 0x0 00:23:03.569 [2024-12-16 14:38:55.737630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.569 [2024-12-16 14:38:55.737643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.569 [2024-12-16 14:38:55.737652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.569 [2024-12-16 14:38:55.737662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72600 len:8 PRP1 0x0 PRP2 0x0 00:23:03.569 [2024-12-16 14:38:55.737674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.569 [2024-12-16 14:38:55.737687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.569 [2024-12-16 14:38:55.737696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.569 [2024-12-16 14:38:55.737706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72608 len:8 PRP1 0x0 PRP2 0x0 00:23:03.569 [2024-12-16 14:38:55.737717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.569 [2024-12-16 14:38:55.737729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.569 [2024-12-16 14:38:55.737739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.569 [2024-12-16 14:38:55.737750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72616 len:8 PRP1 0x0 PRP2 0x0 00:23:03.569 [2024-12-16 14:38:55.737772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.569 [2024-12-16 14:38:55.737784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:03.569 [2024-12-16 14:38:55.737793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:03.569 [2024-12-16 14:38:55.737803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72624 len:8 PRP1 0x0 PRP2 0x0 00:23:03.569 [2024-12-16 14:38:55.737814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.569 [2024-12-16 14:38:55.738002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.569 [2024-12-16 14:38:55.738039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.569 [2024-12-16 14:38:55.738055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.569 [2024-12-16 14:38:55.738068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.569 [2024-12-16 14:38:55.738080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.569 [2024-12-16 14:38:55.738092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.569 [2024-12-16 14:38:55.738106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.569 [2024-12-16 14:38:55.738117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.569 [2024-12-16 14:38:55.738130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218a4d0 is same with the state(6) to be set 00:23:03.569 [2024-12-16 14:38:55.738427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:03.569 [2024-12-16 14:38:55.738494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218a4d0 (9): Bad file descriptor 00:23:03.569 [2024-12-16 14:38:55.738640] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.569 [2024-12-16 14:38:55.738668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218a4d0 with addr=10.0.0.3, port=4420 00:23:03.569 [2024-12-16 14:38:55.738683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218a4d0 is same with the state(6) to be set 00:23:03.569 [2024-12-16 14:38:55.738706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218a4d0 (9): Bad file descriptor 00:23:03.569 [2024-12-16 14:38:55.738728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:03.569 [2024-12-16 14:38:55.738740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:03.569 [2024-12-16 14:38:55.738764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:03.569 [2024-12-16 14:38:55.738778] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:23:03.569 [2024-12-16 14:38:55.738792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:04.762 4475.50 IOPS, 17.48 MiB/s [2024-12-16T14:38:56.962Z] [2024-12-16 14:38:56.738888] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.762 [2024-12-16 14:38:56.738942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218a4d0 with addr=10.0.0.3, port=4420 00:23:04.762 [2024-12-16 14:38:56.738962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218a4d0 is same with the state(6) to be set 00:23:04.762 [2024-12-16 14:38:56.738997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218a4d0 (9): Bad file descriptor 00:23:04.762 [2024-12-16 14:38:56.739014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:04.762 [2024-12-16 14:38:56.739024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:04.762 [2024-12-16 14:38:56.739034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:04.762 [2024-12-16 14:38:56.739043] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:23:04.762 [2024-12-16 14:38:56.739053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:04.762 14:38:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:05.020 [2024-12-16 14:38:56.995984] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:05.020 14:38:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 98410 00:23:05.631 2983.67 IOPS, 11.65 MiB/s [2024-12-16T14:38:57.831Z] [2024-12-16 14:38:57.758264] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:23:07.529 2237.75 IOPS, 8.74 MiB/s [2024-12-16T14:39:00.664Z] 3627.00 IOPS, 14.17 MiB/s [2024-12-16T14:39:02.038Z] 4790.50 IOPS, 18.71 MiB/s [2024-12-16T14:39:02.973Z] 5614.14 IOPS, 21.93 MiB/s [2024-12-16T14:39:03.908Z] 6216.88 IOPS, 24.28 MiB/s [2024-12-16T14:39:04.841Z] 6693.22 IOPS, 26.15 MiB/s [2024-12-16T14:39:04.841Z] 7100.70 IOPS, 27.74 MiB/s
00:23:12.641 Latency(us)
00:23:12.641 [2024-12-16T14:39:04.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:12.641 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:12.641 Verification LBA range: start 0x0 length 0x4000
00:23:12.641 NVMe0n1 : 10.01 7107.18 27.76 0.00 0.00 17984.31 1102.20 3035150.89
00:23:12.641 [2024-12-16T14:39:04.841Z] ===================================================================================================================
00:23:12.641 [2024-12-16T14:39:04.841Z] Total : 7107.18 27.76 0.00 0.00 17984.31 1102.20 3035150.89
00:23:12.641 {
00:23:12.641 "results": [
00:23:12.641 {
00:23:12.641 "job": "NVMe0n1",
00:23:12.641 "core_mask": "0x4",
00:23:12.641 "workload": "verify",
00:23:12.641 "status": "finished",
00:23:12.641 "verify_range": {
00:23:12.641 "start": 0,
00:23:12.641 "length": 16384
00:23:12.641 },
00:23:12.641 "queue_depth": 128,
00:23:12.641 "io_size": 4096,
00:23:12.641 "runtime": 10.007765,
00:23:12.641 "iops": 7107.18127374094,
00:23:12.641 "mibps": 27.762426850550547,
00:23:12.641 "io_failed": 0,
00:23:12.641 "io_timeout": 0,
00:23:12.641 "avg_latency_us": 17984.312498539744,
00:23:12.641 "min_latency_us": 1102.1963636363637,
00:23:12.641 "max_latency_us": 3035150.8945454545
00:23:12.641 }
00:23:12.641 ],
00:23:12.641 "core_count": 1
00:23:12.642 }
00:23:12.642 14:39:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=98515
00:23:12.642 14:39:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:12.642 14:39:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:23:12.642 Running I/O for 10 seconds...
00:23:13.576 14:39:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:13.837 8084.00 IOPS, 31.58 MiB/s [2024-12-16T14:39:06.037Z] [2024-12-16 14:39:05.890756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.837 [2024-12-16 14:39:05.890812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.890824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.837 [2024-12-16 14:39:05.890833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.890841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.837 [2024-12-16 14:39:05.890848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.890857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:13.837 [2024-12-16 14:39:05.890864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.890872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218a4d0 is same with the state(6) to be set 00:23:13.837 [2024-12-16 14:39:05.891145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.837 [2024-12-16 14:39:05.891164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 
14:39:05.891523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.837 [2024-12-16 14:39:05.891687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.837 [2024-12-16 14:39:05.891697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.891705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.891715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.891723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.891732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.891740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.891749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.891757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.891766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.891774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.891783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.891791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.891800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.891808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.891818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.891825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.891835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.891857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.891866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.891874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.891883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.891891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.891900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:26 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.891907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.891916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.891924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.891933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.891941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.891966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.891974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.891983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.891991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73344 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 
14:39:05.892270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.838 [2024-12-16 14:39:05.892429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.838 [2024-12-16 14:39:05.892437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.892967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 
14:39:05.892984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.892992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.893001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.893009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.893018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.893026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.893035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.893043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.893052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.893059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.893068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.893076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.893085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.893093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.893102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.893109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.893120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.893128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.839 [2024-12-16 14:39:05.893137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.839 [2024-12-16 14:39:05.893145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.840 [2024-12-16 14:39:05.893161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.840 [2024-12-16 14:39:05.893178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.840 [2024-12-16 14:39:05.893195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.840 [2024-12-16 14:39:05.893216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.840 [2024-12-16 14:39:05.893233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.840 [2024-12-16 14:39:05.893250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.840 [2024-12-16 14:39:05.893267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.840 [2024-12-16 14:39:05.893285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.840 [2024-12-16 14:39:05.893302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.840 [2024-12-16 14:39:05.893319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:90 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.840 [2024-12-16 14:39:05.893335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.840 [2024-12-16 14:39:05.893352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.840 [2024-12-16 14:39:05.893369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.840 [2024-12-16 14:39:05.893386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.840 [2024-12-16 14:39:05.893404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.840 [2024-12-16 14:39:05.893421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.840 [2024-12-16 14:39:05.893463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.840 [2024-12-16 14:39:05.893480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.840 [2024-12-16 14:39:05.893498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.840 [2024-12-16 14:39:05.893517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73872 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:13.840 [2024-12-16 14:39:05.893535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21acbe0 is same with the state(6) to be set 00:23:13.840 [2024-12-16 14:39:05.893554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:13.840 [2024-12-16 14:39:05.893560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:13.840 [2024-12-16 14:39:05.893567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73880 len:8 PRP1 0x0 PRP2 0x0 00:23:13.840 [2024-12-16 14:39:05.893575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:13.840 [2024-12-16 14:39:05.893795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:13.840 [2024-12-16 14:39:05.893817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218a4d0 (9): Bad file descriptor 00:23:13.840 [2024-12-16 14:39:05.893913] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.840 [2024-12-16 14:39:05.893933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218a4d0 with addr=10.0.0.3, port=4420 00:23:13.840 [2024-12-16 14:39:05.893942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218a4d0 is same with the state(6) to be set 00:23:13.840 [2024-12-16 14:39:05.893958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218a4d0 (9): Bad file descriptor 00:23:13.840 [2024-12-16 14:39:05.893972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:13.840 [2024-12-16 14:39:05.893980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:13.840 [2024-12-16 14:39:05.893989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:13.840 [2024-12-16 14:39:05.893998] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:23:13.840 [2024-12-16 14:39:05.894008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:13.840 14:39:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:23:14.774 4554.00 IOPS, 17.79 MiB/s [2024-12-16T14:39:06.974Z] [2024-12-16 14:39:06.894091] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.774 [2024-12-16 14:39:06.894166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218a4d0 with addr=10.0.0.3, port=4420 00:23:14.774 [2024-12-16 14:39:06.894180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218a4d0 is same with the state(6) to be set 00:23:14.774 [2024-12-16 14:39:06.894199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218a4d0 (9): Bad file descriptor 00:23:14.774 [2024-12-16 14:39:06.894215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:14.774 [2024-12-16 14:39:06.894223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:14.774 [2024-12-16 14:39:06.894232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:14.774 [2024-12-16 14:39:06.894242] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:14.774 [2024-12-16 14:39:06.894250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:15.708 3036.00 IOPS, 11.86 MiB/s [2024-12-16T14:39:07.908Z] [2024-12-16 14:39:07.894321] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:15.708 [2024-12-16 14:39:07.894389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218a4d0 with addr=10.0.0.3, port=4420 00:23:15.708 [2024-12-16 14:39:07.894418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218a4d0 is same with the state(6) to be set 00:23:15.708 [2024-12-16 14:39:07.894436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218a4d0 (9): Bad file descriptor 00:23:15.708 [2024-12-16 14:39:07.894452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:15.708 [2024-12-16 14:39:07.894473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:15.708 [2024-12-16 14:39:07.894483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:15.708 [2024-12-16 14:39:07.894492] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:23:15.708 [2024-12-16 14:39:07.894501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:16.901 2277.00 IOPS, 8.89 MiB/s [2024-12-16T14:39:09.101Z] [2024-12-16 14:39:08.897769] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.901 [2024-12-16 14:39:08.897825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218a4d0 with addr=10.0.0.3, port=4420 00:23:16.901 [2024-12-16 14:39:08.897853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218a4d0 is same with the state(6) to be set 00:23:16.901 [2024-12-16 14:39:08.898092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218a4d0 (9): Bad file descriptor 00:23:16.901 [2024-12-16 14:39:08.898345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:16.901 [2024-12-16 14:39:08.898364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:16.901 [2024-12-16 14:39:08.898374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:16.901 [2024-12-16 14:39:08.898384] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:16.901 [2024-12-16 14:39:08.898394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:16.901 14:39:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:17.159 [2024-12-16 14:39:09.189180] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:17.159 14:39:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 98515 00:23:17.983 1821.60 IOPS, 7.12 MiB/s [2024-12-16T14:39:10.183Z] [2024-12-16 14:39:09.927849] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
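The stretch above is the listener-drop/reconnect exercise: with nothing listening on 10.0.0.3:4420, each reconnect attempt fails with errno 111 (ECONNREFUSED) and ends in "Resetting controller failed", roughly once per second, until nvmf_subsystem_add_listener restores the port and the next reset reports "Resetting controller successful". A minimal sketch of that flow, reassembled from the rpc.py and host/timeout.sh lines visible in this trace; the initial remove-listener step is an assumption here, since it happened before this excerpt, and the backgrounded-bdevperf pid variable is only illustrative.

# assumed earlier step: drop the TCP listener so host reconnects start failing with ECONNREFUSED (111)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
sleep 3    # host/timeout.sh@101 above: let the controller go through a few failed reset cycles
# restore the listener; the controller's next reset attempt then succeeds
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
wait "$bdevperf_pid"    # host/timeout.sh@103 above waits on the running bdevperf (pid 98515)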
00:23:19.853 3018.50 IOPS, 11.79 MiB/s [2024-12-16T14:39:12.988Z] 4143.71 IOPS, 16.19 MiB/s [2024-12-16T14:39:13.922Z] 4990.12 IOPS, 19.49 MiB/s [2024-12-16T14:39:14.855Z] 5657.89 IOPS, 22.10 MiB/s [2024-12-16T14:39:14.855Z] 6197.70 IOPS, 24.21 MiB/s 00:23:22.655 Latency(us) 00:23:22.655 [2024-12-16T14:39:14.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.655 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:22.655 Verification LBA range: start 0x0 length 0x4000 00:23:22.655 NVMe0n1 : 10.01 6201.92 24.23 4187.91 0.00 12297.23 696.32 3019898.88 00:23:22.655 [2024-12-16T14:39:14.855Z] =================================================================================================================== 00:23:22.655 [2024-12-16T14:39:14.855Z] Total : 6201.92 24.23 4187.91 0.00 12297.23 0.00 3019898.88 00:23:22.655 { 00:23:22.655 "results": [ 00:23:22.655 { 00:23:22.655 "job": "NVMe0n1", 00:23:22.655 "core_mask": "0x4", 00:23:22.655 "workload": "verify", 00:23:22.655 "status": "finished", 00:23:22.655 "verify_range": { 00:23:22.655 "start": 0, 00:23:22.655 "length": 16384 00:23:22.655 }, 00:23:22.655 "queue_depth": 128, 00:23:22.655 "io_size": 4096, 00:23:22.655 "runtime": 10.007385, 00:23:22.655 "iops": 6201.91988216702, 00:23:22.655 "mibps": 24.22624953971492, 00:23:22.655 "io_failed": 41910, 00:23:22.655 "io_timeout": 0, 00:23:22.655 "avg_latency_us": 12297.230231113248, 00:23:22.655 "min_latency_us": 696.32, 00:23:22.655 "max_latency_us": 3019898.88 00:23:22.655 } 00:23:22.655 ], 00:23:22.655 "core_count": 1 00:23:22.655 } 00:23:22.655 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 98394 00:23:22.655 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98394 ']' 00:23:22.655 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98394 00:23:22.655 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:22.655 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.655 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98394 00:23:22.655 killing process with pid 98394 00:23:22.655 Received shutdown signal, test time was about 10.000000 seconds 00:23:22.655 00:23:22.655 Latency(us) 00:23:22.655 [2024-12-16T14:39:14.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.656 [2024-12-16T14:39:14.856Z] =================================================================================================================== 00:23:22.656 [2024-12-16T14:39:14.856Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:22.656 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:22.656 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:22.656 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98394' 00:23:22.656 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98394 00:23:22.656 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98394 00:23:22.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:22.914 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=98628 00:23:22.914 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:23:22.914 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 98628 /var/tmp/bdevperf.sock 00:23:22.914 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98628 ']' 00:23:22.914 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.914 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.914 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.914 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.914 14:39:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:22.914 [2024-12-16 14:39:14.995900] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:22.914 [2024-12-16 14:39:14.996002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98628 ] 00:23:23.171 [2024-12-16 14:39:15.140522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.172 [2024-12-16 14:39:15.159206] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.172 [2024-12-16 14:39:15.186303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:23.172 14:39:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.172 14:39:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:23.172 14:39:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=98632 00:23:23.172 14:39:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98628 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:23:23.172 14:39:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:23.429 14:39:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:23.687 NVMe0n1 00:23:23.687 14:39:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=98674 00:23:23.687 14:39:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:23.687 14:39:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:23:23.945 Running I/O for 10 seconds... 
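The trace above then sets up the next timeout case: a fresh bdevperf is started in the waiting state on its own RPC socket, bdev_nvme options are applied, and the controller is attached with a 5-second ctrlr-loss timeout and a 2-second reconnect delay before I/O is kicked off. The same steps pulled out of the trace into one sketch; the ordering and the backgrounding are inferred, and the bdev_nvme_set_options flags are reproduced verbatim from host/timeout.sh@118 without interpreting them.

# start bdevperf on core 2 (-m 0x4), idle until perform_tests is requested (-z), 4 KiB randread at QD 128 for 10 s
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
# bdev_nvme options exactly as issued by the test script (host/timeout.sh@118)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
# attach NVMe0 over TCP; tolerate up to 5 s of controller loss, retrying the connection every 2 s
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# start the 10-second I/O run (host/timeout.sh@123)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &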
00:23:24.884 14:39:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:24.884 17145.00 IOPS, 66.97 MiB/s [2024-12-16T14:39:17.084Z] [2024-12-16 14:39:17.047902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.047963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.047988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.047996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 
14:39:17.048117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048138] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to 
be set 00:23:24.884 [2024-12-16 14:39:17.048327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.884 [2024-12-16 14:39:17.048427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048682] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 
00:23:24.885 [2024-12-16 14:39:17.048851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.048976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb7e20 is same with the state(6) to be set 00:23:24.885 [2024-12-16 14:39:17.049029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.885 [2024-12-16 14:39:17.049058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.885 [2024-12-16 14:39:17.049079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.885 [2024-12-16 14:39:17.049089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.885 [2024-12-16 14:39:17.049100] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.885 [2024-12-16 14:39:17.049109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.885 [2024-12-16 14:39:17.049119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.885 [2024-12-16 14:39:17.049128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.885 [2024-12-16 14:39:17.049139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.885 [2024-12-16 14:39:17.049148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.885 [2024-12-16 14:39:17.049158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.885 [2024-12-16 14:39:17.049167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.885 [2024-12-16 14:39:17.049177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.885 [2024-12-16 14:39:17.049185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.885 [2024-12-16 14:39:17.049196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.885 [2024-12-16 14:39:17.049204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.885 [2024-12-16 14:39:17.049231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:118224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.885 [2024-12-16 14:39:17.049239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27376 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:24.886 [2024-12-16 14:39:17.049741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049929] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.049981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.049990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.050000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.050009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.050019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.050028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.050039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.886 [2024-12-16 14:39:17.050048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.886 [2024-12-16 14:39:17.050058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050122] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:47392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:24.887 [2024-12-16 14:39:17.050562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 
14:39:17.050754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:53792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:28192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.887 [2024-12-16 14:39:17.050867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.887 [2024-12-16 14:39:17.050875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.050885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:29456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.050893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.050903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.050912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.050924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.050932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.050943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.050951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.050961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.050996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:127056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051189] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:105 nsid:1 lba:33680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 
nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.888 [2024-12-16 14:39:17.051590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.888 [2024-12-16 14:39:17.051601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:47464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.889 [2024-12-16 14:39:17.051610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.889 [2024-12-16 14:39:17.051620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77288 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.889 [2024-12-16 14:39:17.051629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.889 [2024-12-16 14:39:17.051641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.889 [2024-12-16 14:39:17.051649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.889 [2024-12-16 14:39:17.051660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.889 [2024-12-16 14:39:17.051668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.889 [2024-12-16 14:39:17.051678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.889 [2024-12-16 14:39:17.051687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.889 [2024-12-16 14:39:17.051697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2370680 is same with the state(6) to be set 00:23:24.889 [2024-12-16 14:39:17.051708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.889 [2024-12-16 14:39:17.051715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.889 [2024-12-16 14:39:17.051722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37128 len:8 PRP1 0x0 PRP2 0x0 00:23:24.889 [2024-12-16 14:39:17.051731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.889 [2024-12-16 14:39:17.052041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:24.889 [2024-12-16 14:39:17.052121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234f090 (9): Bad file descriptor 00:23:24.889 [2024-12-16 14:39:17.052225] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.889 [2024-12-16 14:39:17.052246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234f090 with addr=10.0.0.3, port=4420 00:23:24.889 [2024-12-16 14:39:17.052256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234f090 is same with the state(6) to be set 00:23:24.889 [2024-12-16 14:39:17.052273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234f090 (9): Bad file descriptor 00:23:24.889 [2024-12-16 14:39:17.052289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:24.889 [2024-12-16 14:39:17.052297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:24.889 [2024-12-16 14:39:17.052307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:24.889 [2024-12-16 14:39:17.052317] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
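Every queued read above is completed manually with the status pair printed as (00/08): status code type 0x00 (generic command status) and status code 0x08, which the NVMe base specification names Command Aborted due to SQ Deletion, matching the ABORTED - SQ DELETION text that spdk_nvme_print_completion emits. Below is a minimal shell sketch of a decoder for the pair seen in this log plus the success case; the helper name is hypothetical and not part of the SPDK tree.

decode_nvme_status() {
    # Map the "(SCT/SC)" pair printed by spdk_nvme_print_completion, e.g. "(00/08)"
    # in the completions above, to a human-readable meaning. Hypothetical helper.
    local sct="$1" sc="$2"
    case "${sct}/${sc}" in
        00/00) echo "GENERIC - SUCCESSFUL COMPLETION" ;;
        00/08) echo "GENERIC - ABORTED - SQ DELETION" ;;
        *)     echo "SCT=0x${sct} SC=0x${sc} (see the NVMe base specification status code tables)" ;;
    esac
}

decode_nvme_status 00 08    # prints: GENERIC - ABORTED - SQ DELETION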
00:23:24.889 [2024-12-16 14:39:17.052326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:24.889 14:39:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 98674 00:23:26.795 9653.00 IOPS, 37.71 MiB/s [2024-12-16T14:39:19.252Z] 6435.33 IOPS, 25.14 MiB/s [2024-12-16T14:39:19.252Z] [2024-12-16 14:39:19.052474] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:27.052 [2024-12-16 14:39:19.052534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234f090 with addr=10.0.0.3, port=4420 00:23:27.052 [2024-12-16 14:39:19.052548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234f090 is same with the state(6) to be set 00:23:27.052 [2024-12-16 14:39:19.052570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234f090 (9): Bad file descriptor 00:23:27.053 [2024-12-16 14:39:19.052587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:27.053 [2024-12-16 14:39:19.052595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:27.053 [2024-12-16 14:39:19.052605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:27.053 [2024-12-16 14:39:19.052614] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:27.053 [2024-12-16 14:39:19.052624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:28.920 4826.50 IOPS, 18.85 MiB/s [2024-12-16T14:39:21.120Z] 3861.20 IOPS, 15.08 MiB/s [2024-12-16T14:39:21.120Z] [2024-12-16 14:39:21.052803] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.920 [2024-12-16 14:39:21.052862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234f090 with addr=10.0.0.3, port=4420 00:23:28.920 [2024-12-16 14:39:21.052876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234f090 is same with the state(6) to be set 00:23:28.920 [2024-12-16 14:39:21.052896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234f090 (9): Bad file descriptor 00:23:28.920 [2024-12-16 14:39:21.052913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:28.920 [2024-12-16 14:39:21.052922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:28.920 [2024-12-16 14:39:21.052932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:28.920 [2024-12-16 14:39:21.052942] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:28.920 [2024-12-16 14:39:21.052951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:30.790 3217.67 IOPS, 12.57 MiB/s [2024-12-16T14:39:23.249Z] 2758.00 IOPS, 10.77 MiB/s [2024-12-16T14:39:23.249Z] [2024-12-16 14:39:23.053105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
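The reconnect attempts above are spaced two seconds apart (14:39:17, 14:39:19, 14:39:21) and each connect() to 10.0.0.3:4420 is refused with errno = 111 (ECONNREFUSED), which appears to be the failure mode this timeout test deliberately provokes; further down, host/timeout.sh cats trace.txt and requires at least three 'reconnect delay bdev controller NVMe0' probe hits. A minimal sketch of that pass/fail check follows, assuming the trace file path shown in the traced commands below; the surrounding control flow in the real script may differ.

TRACE=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$TRACE")
# The traced test evaluates (( count <= 2 )); fewer than three recorded delay
# events means the reconnect path never backed off as expected, so fail the run.
if (( delays <= 2 )); then
    echo "expected at least 3 reconnect delay events, got ${delays}" >&2
    exit 1
fi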
00:23:31.049 [2024-12-16 14:39:23.053137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:31.049 [2024-12-16 14:39:23.053163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:31.049 [2024-12-16 14:39:23.053171] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:23:31.049 [2024-12-16 14:39:23.053181] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:31.983 2413.25 IOPS, 9.43 MiB/s 00:23:31.983 Latency(us) 00:23:31.983 [2024-12-16T14:39:24.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.983 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:31.983 NVMe0n1 : 8.13 2373.76 9.27 15.74 0.00 53487.79 6940.86 7015926.69 00:23:31.983 [2024-12-16T14:39:24.183Z] =================================================================================================================== 00:23:31.983 [2024-12-16T14:39:24.183Z] Total : 2373.76 9.27 15.74 0.00 53487.79 6940.86 7015926.69 00:23:31.983 { 00:23:31.983 "results": [ 00:23:31.983 { 00:23:31.983 "job": "NVMe0n1", 00:23:31.983 "core_mask": "0x4", 00:23:31.983 "workload": "randread", 00:23:31.983 "status": "finished", 00:23:31.983 "queue_depth": 128, 00:23:31.983 "io_size": 4096, 00:23:31.983 "runtime": 8.13309, 00:23:31.983 "iops": 2373.759542805994, 00:23:31.983 "mibps": 9.272498214085914, 00:23:31.983 "io_failed": 128, 00:23:31.983 "io_timeout": 0, 00:23:31.983 "avg_latency_us": 53487.78961258151, 00:23:31.983 "min_latency_us": 6940.858181818182, 00:23:31.983 "max_latency_us": 7015926.69090909 00:23:31.983 } 00:23:31.983 ], 00:23:31.983 "core_count": 1 00:23:31.983 } 00:23:31.983 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:31.983 Attaching 5 probes... 
00:23:31.983 1309.098267: reset bdev controller NVMe0 00:23:31.983 1309.231591: reconnect bdev controller NVMe0 00:23:31.983 3309.419735: reconnect delay bdev controller NVMe0 00:23:31.983 3309.452429: reconnect bdev controller NVMe0 00:23:31.983 5309.760825: reconnect delay bdev controller NVMe0 00:23:31.983 5309.791678: reconnect bdev controller NVMe0 00:23:31.983 7310.140707: reconnect delay bdev controller NVMe0 00:23:31.983 7310.155192: reconnect bdev controller NVMe0 00:23:31.983 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:31.983 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:31.983 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 98632 00:23:31.983 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:31.983 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 98628 00:23:31.983 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98628 ']' 00:23:31.983 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98628 00:23:31.983 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:31.983 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.983 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98628 00:23:31.983 killing process with pid 98628 00:23:31.983 Received shutdown signal, test time was about 8.198962 seconds 00:23:31.983 00:23:31.983 Latency(us) 00:23:31.983 [2024-12-16T14:39:24.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.983 [2024-12-16T14:39:24.183Z] =================================================================================================================== 00:23:31.983 [2024-12-16T14:39:24.183Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:31.983 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:31.983 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:31.983 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98628' 00:23:31.983 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98628 00:23:31.983 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98628 00:23:32.242 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:32.501 14:39:24 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:32.501 rmmod nvme_tcp 00:23:32.501 rmmod nvme_fabrics 00:23:32.501 rmmod nvme_keyring 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 98219 ']' 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 98219 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98219 ']' 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98219 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98219 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.501 killing process with pid 98219 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98219' 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98219 00:23:32.501 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98219 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:32.760 14:39:24 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:23:32.760 00:23:32.760 real 0m44.104s 00:23:32.760 user 2m9.324s 00:23:32.760 sys 0m5.301s 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.760 14:39:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:32.760 ************************************ 00:23:32.760 END TEST nvmf_timeout 00:23:32.760 ************************************ 00:23:33.019 14:39:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:23:33.019 14:39:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:33.019 00:23:33.019 real 5m38.054s 00:23:33.019 user 15m52.891s 00:23:33.019 sys 1m16.060s 00:23:33.019 14:39:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.019 14:39:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.019 ************************************ 00:23:33.019 END TEST nvmf_host 00:23:33.019 ************************************ 00:23:33.019 14:39:25 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:23:33.019 14:39:25 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:23:33.019 00:23:33.019 real 14m56.701s 00:23:33.019 user 39m21.386s 00:23:33.019 sys 4m1.100s 00:23:33.019 14:39:25 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.019 ************************************ 00:23:33.019 END TEST nvmf_tcp 00:23:33.019 14:39:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:33.019 ************************************ 00:23:33.019 14:39:25 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:23:33.019 14:39:25 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:33.019 14:39:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:33.019 14:39:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:33.019 14:39:25 -- common/autotest_common.sh@10 -- # set +x 00:23:33.019 ************************************ 00:23:33.019 START TEST nvmf_dif 00:23:33.019 ************************************ 00:23:33.019 14:39:25 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:33.019 * Looking for test storage... 
00:23:33.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:33.020 14:39:25 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:33.020 14:39:25 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:23:33.020 14:39:25 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:33.279 14:39:25 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:23:33.279 14:39:25 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.279 14:39:25 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:33.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.279 --rc genhtml_branch_coverage=1 00:23:33.279 --rc genhtml_function_coverage=1 00:23:33.279 --rc genhtml_legend=1 00:23:33.279 --rc geninfo_all_blocks=1 00:23:33.279 --rc geninfo_unexecuted_blocks=1 00:23:33.279 00:23:33.279 ' 00:23:33.279 14:39:25 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:33.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.279 --rc genhtml_branch_coverage=1 00:23:33.279 --rc genhtml_function_coverage=1 00:23:33.279 --rc genhtml_legend=1 00:23:33.279 --rc geninfo_all_blocks=1 00:23:33.279 --rc geninfo_unexecuted_blocks=1 00:23:33.279 00:23:33.279 ' 00:23:33.279 14:39:25 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:23:33.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.279 --rc genhtml_branch_coverage=1 00:23:33.279 --rc genhtml_function_coverage=1 00:23:33.279 --rc genhtml_legend=1 00:23:33.279 --rc geninfo_all_blocks=1 00:23:33.279 --rc geninfo_unexecuted_blocks=1 00:23:33.279 00:23:33.279 ' 00:23:33.279 14:39:25 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:33.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.279 --rc genhtml_branch_coverage=1 00:23:33.279 --rc genhtml_function_coverage=1 00:23:33.279 --rc genhtml_legend=1 00:23:33.279 --rc geninfo_all_blocks=1 00:23:33.279 --rc geninfo_unexecuted_blocks=1 00:23:33.279 00:23:33.279 ' 00:23:33.279 14:39:25 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.279 14:39:25 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.279 14:39:25 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.279 14:39:25 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.279 14:39:25 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.279 14:39:25 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:33.279 14:39:25 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:33.279 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.279 14:39:25 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:33.279 14:39:25 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:33.279 14:39:25 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:33.279 14:39:25 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:33.279 14:39:25 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:33.279 14:39:25 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.280 14:39:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:33.280 14:39:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:33.280 14:39:25 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:33.280 Cannot find device "nvmf_init_br" 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:33.280 Cannot find device "nvmf_init_br2" 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:33.280 Cannot find device "nvmf_tgt_br" 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@164 -- # true 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:33.280 Cannot find device "nvmf_tgt_br2" 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@165 -- # true 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:33.280 Cannot find device "nvmf_init_br" 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@166 -- # true 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:33.280 Cannot find device "nvmf_init_br2" 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@167 -- # true 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:33.280 Cannot find device "nvmf_tgt_br" 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@168 -- # true 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:33.280 Cannot find device "nvmf_tgt_br2" 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@169 -- # true 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:33.280 Cannot find device "nvmf_br" 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@170 -- # true 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:23:33.280 Cannot find device "nvmf_init_if" 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@171 -- # true 00:23:33.280 14:39:25 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:33.539 Cannot find device "nvmf_init_if2" 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@172 -- # true 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:33.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@173 -- # true 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:33.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@174 -- # true 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:33.539 14:39:25 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:33.539 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:33.539 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:23:33.539 00:23:33.539 --- 10.0.0.3 ping statistics --- 00:23:33.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.539 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:33.539 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:33.539 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:23:33.539 00:23:33.539 --- 10.0.0.4 ping statistics --- 00:23:33.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.539 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:33.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:33.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:33.539 00:23:33.539 --- 10.0.0.1 ping statistics --- 00:23:33.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.539 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:33.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:33.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:23:33.539 00:23:33.539 --- 10.0.0.2 ping statistics --- 00:23:33.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.539 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:23:33.539 14:39:25 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:34.107 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:34.107 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:34.107 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:34.107 14:39:26 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.107 14:39:26 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:34.107 14:39:26 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:34.107 14:39:26 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.107 14:39:26 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:34.107 14:39:26 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:34.107 14:39:26 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:34.107 14:39:26 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:34.107 14:39:26 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:34.107 14:39:26 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:34.107 14:39:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:34.107 14:39:26 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=99171 00:23:34.108 14:39:26 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:34.108 14:39:26 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 99171 00:23:34.108 14:39:26 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 99171 ']' 00:23:34.108 14:39:26 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.108 14:39:26 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.108 14:39:26 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.108 14:39:26 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.108 14:39:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:34.108 [2024-12-16 14:39:26.185654] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:34.108 [2024-12-16 14:39:26.185759] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.368 [2024-12-16 14:39:26.327629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.368 [2024-12-16 14:39:26.345758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:34.368 [2024-12-16 14:39:26.345831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.368 [2024-12-16 14:39:26.345857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.368 [2024-12-16 14:39:26.345864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.368 [2024-12-16 14:39:26.345870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:34.368 [2024-12-16 14:39:26.346148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.368 [2024-12-16 14:39:26.373620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:34.368 14:39:26 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.368 14:39:26 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:23:34.368 14:39:26 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:34.368 14:39:26 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.368 14:39:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:34.368 14:39:26 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.368 14:39:26 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:34.368 14:39:26 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:34.368 14:39:26 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.368 14:39:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:34.368 [2024-12-16 14:39:26.501536] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.368 14:39:26 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.368 14:39:26 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:34.368 14:39:26 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:34.368 14:39:26 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:34.368 14:39:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:34.368 ************************************ 00:23:34.368 START TEST fio_dif_1_default 00:23:34.368 ************************************ 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:34.368 bdev_null0 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:34.368 
14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:34.368 [2024-12-16 14:39:26.545656] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:34.368 { 00:23:34.368 "params": { 00:23:34.368 "name": "Nvme$subsystem", 00:23:34.368 "trtype": "$TEST_TRANSPORT", 00:23:34.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.368 "adrfam": "ipv4", 00:23:34.368 "trsvcid": "$NVMF_PORT", 00:23:34.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.368 "hdgst": ${hdgst:-false}, 00:23:34.368 "ddgst": ${ddgst:-false} 00:23:34.368 }, 00:23:34.368 "method": "bdev_nvme_attach_controller" 00:23:34.368 } 00:23:34.368 EOF 00:23:34.368 )") 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:23:34.368 14:39:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:34.368 "params": { 00:23:34.368 "name": "Nvme0", 00:23:34.368 "trtype": "tcp", 00:23:34.368 "traddr": "10.0.0.3", 00:23:34.368 "adrfam": "ipv4", 00:23:34.368 "trsvcid": "4420", 00:23:34.368 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:34.368 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:34.368 "hdgst": false, 00:23:34.368 "ddgst": false 00:23:34.368 }, 00:23:34.368 "method": "bdev_nvme_attach_controller" 00:23:34.368 }' 00:23:34.627 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:34.627 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:34.628 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.628 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.628 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:34.628 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:34.628 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:34.628 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:34.628 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:34.628 14:39:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.628 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:34.628 fio-3.35 00:23:34.628 Starting 1 thread 00:23:46.837 00:23:46.837 filename0: (groupid=0, jobs=1): err= 0: pid=99229: Mon Dec 16 14:39:37 2024 00:23:46.837 read: IOPS=9975, BW=39.0MiB/s (40.9MB/s)(390MiB/10001msec) 00:23:46.837 slat (nsec): min=5949, max=55273, avg=7595.82, stdev=3026.13 00:23:46.837 clat (usec): min=319, max=3528, avg=378.46, stdev=42.09 00:23:46.837 lat (usec): min=325, max=3561, avg=386.05, stdev=42.81 00:23:46.837 clat percentiles (usec): 00:23:46.837 | 1.00th=[ 326], 5.00th=[ 
330], 10.00th=[ 338], 20.00th=[ 351], 00:23:46.837 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 379], 00:23:46.837 | 70.00th=[ 392], 80.00th=[ 404], 90.00th=[ 424], 95.00th=[ 445], 00:23:46.837 | 99.00th=[ 498], 99.50th=[ 519], 99.90th=[ 562], 99.95th=[ 578], 00:23:46.837 | 99.99th=[ 1434] 00:23:46.837 bw ( KiB/s): min=37536, max=40864, per=100.00%, avg=39936.00, stdev=782.46, samples=19 00:23:46.837 iops : min= 9384, max=10216, avg=9984.00, stdev=195.61, samples=19 00:23:46.837 lat (usec) : 500=99.01%, 750=0.98% 00:23:46.837 lat (msec) : 2=0.01%, 4=0.01% 00:23:46.837 cpu : usr=84.96%, sys=13.18%, ctx=46, majf=0, minf=4 00:23:46.837 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.837 issued rwts: total=99768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.837 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:46.837 00:23:46.837 Run status group 0 (all jobs): 00:23:46.837 READ: bw=39.0MiB/s (40.9MB/s), 39.0MiB/s-39.0MiB/s (40.9MB/s-40.9MB/s), io=390MiB (409MB), run=10001-10001msec 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.837 00:23:46.837 real 0m10.894s 00:23:46.837 user 0m9.074s 00:23:46.837 sys 0m1.550s 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:46.837 ************************************ 00:23:46.837 END TEST fio_dif_1_default 00:23:46.837 ************************************ 00:23:46.837 14:39:37 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:46.837 14:39:37 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:46.837 14:39:37 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.837 14:39:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:46.837 ************************************ 00:23:46.837 START TEST fio_dif_1_multi_subsystems 00:23:46.837 ************************************ 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- 
# fio_dif_1_multi_subsystems 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.837 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.837 bdev_null0 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.838 [2024-12-16 14:39:37.493937] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.838 bdev_null1 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.838 14:39:37 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:46.838 { 00:23:46.838 "params": { 00:23:46.838 "name": "Nvme$subsystem", 00:23:46.838 "trtype": "$TEST_TRANSPORT", 00:23:46.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.838 "adrfam": "ipv4", 00:23:46.838 "trsvcid": "$NVMF_PORT", 00:23:46.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.838 "hdgst": ${hdgst:-false}, 00:23:46.838 "ddgst": ${ddgst:-false} 00:23:46.838 }, 00:23:46.838 "method": "bdev_nvme_attach_controller" 00:23:46.838 } 00:23:46.838 EOF 00:23:46.838 )") 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- 
# local file 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:46.838 { 00:23:46.838 "params": { 00:23:46.838 "name": "Nvme$subsystem", 00:23:46.838 "trtype": "$TEST_TRANSPORT", 00:23:46.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.838 "adrfam": "ipv4", 00:23:46.838 "trsvcid": "$NVMF_PORT", 00:23:46.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.838 "hdgst": ${hdgst:-false}, 00:23:46.838 "ddgst": ${ddgst:-false} 00:23:46.838 }, 00:23:46.838 "method": "bdev_nvme_attach_controller" 00:23:46.838 } 00:23:46.838 EOF 00:23:46.838 )") 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:46.838 "params": { 00:23:46.838 "name": "Nvme0", 00:23:46.838 "trtype": "tcp", 00:23:46.838 "traddr": "10.0.0.3", 00:23:46.838 "adrfam": "ipv4", 00:23:46.838 "trsvcid": "4420", 00:23:46.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:46.838 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:46.838 "hdgst": false, 00:23:46.838 "ddgst": false 00:23:46.838 }, 00:23:46.838 "method": "bdev_nvme_attach_controller" 00:23:46.838 },{ 00:23:46.838 "params": { 00:23:46.838 "name": "Nvme1", 00:23:46.838 "trtype": "tcp", 00:23:46.838 "traddr": "10.0.0.3", 00:23:46.838 "adrfam": "ipv4", 00:23:46.838 "trsvcid": "4420", 00:23:46.838 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.838 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:46.838 "hdgst": false, 00:23:46.838 "ddgst": false 00:23:46.838 }, 00:23:46.838 "method": "bdev_nvme_attach_controller" 00:23:46.838 }' 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:46.838 14:39:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:46.838 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:46.838 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:46.838 fio-3.35 00:23:46.838 Starting 2 threads 00:23:56.817 00:23:56.817 filename0: (groupid=0, jobs=1): err= 0: pid=99390: Mon Dec 16 14:39:48 2024 00:23:56.817 read: IOPS=5352, BW=20.9MiB/s (21.9MB/s)(209MiB/10001msec) 00:23:56.817 slat (nsec): min=6306, max=68420, avg=12385.94, stdev=4288.64 00:23:56.817 clat (usec): min=610, max=1466, avg=713.30, stdev=52.42 00:23:56.817 lat (usec): min=619, max=1492, avg=725.68, stdev=53.07 00:23:56.817 clat percentiles (usec): 00:23:56.817 | 1.00th=[ 635], 5.00th=[ 652], 10.00th=[ 660], 20.00th=[ 676], 00:23:56.817 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[ 701], 60.00th=[ 717], 00:23:56.817 | 70.00th=[ 725], 80.00th=[ 750], 90.00th=[ 783], 95.00th=[ 816], 00:23:56.817 | 99.00th=[ 889], 99.50th=[ 922], 99.90th=[ 971], 99.95th=[ 996], 00:23:56.817 | 99.99th=[ 1205] 00:23:56.817 bw ( KiB/s): min=21024, max=21792, per=50.03%, avg=21426.53, stdev=251.81, samples=19 00:23:56.817 iops : min= 5256, max= 
5448, avg=5356.63, stdev=62.95, samples=19 00:23:56.817 lat (usec) : 750=80.78%, 1000=19.17% 00:23:56.817 lat (msec) : 2=0.05% 00:23:56.817 cpu : usr=90.33%, sys=8.43%, ctx=9, majf=0, minf=0 00:23:56.817 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:56.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.817 issued rwts: total=53532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.817 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:56.817 filename1: (groupid=0, jobs=1): err= 0: pid=99391: Mon Dec 16 14:39:48 2024 00:23:56.817 read: IOPS=5353, BW=20.9MiB/s (21.9MB/s)(209MiB/10001msec) 00:23:56.817 slat (nsec): min=6323, max=55537, avg=12341.80, stdev=4125.21 00:23:56.817 clat (usec): min=567, max=1240, avg=713.96, stdev=58.04 00:23:56.818 lat (usec): min=577, max=1261, avg=726.30, stdev=59.08 00:23:56.818 clat percentiles (usec): 00:23:56.818 | 1.00th=[ 603], 5.00th=[ 627], 10.00th=[ 652], 20.00th=[ 668], 00:23:56.818 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[ 709], 60.00th=[ 717], 00:23:56.818 | 70.00th=[ 734], 80.00th=[ 758], 90.00th=[ 791], 95.00th=[ 824], 00:23:56.818 | 99.00th=[ 898], 99.50th=[ 930], 99.90th=[ 979], 99.95th=[ 1004], 00:23:56.818 | 99.99th=[ 1172] 00:23:56.818 bw ( KiB/s): min=21024, max=21824, per=50.04%, avg=21428.21, stdev=254.49, samples=19 00:23:56.818 iops : min= 5256, max= 5456, avg=5357.05, stdev=63.62, samples=19 00:23:56.818 lat (usec) : 750=78.36%, 1000=21.59% 00:23:56.818 lat (msec) : 2=0.05% 00:23:56.818 cpu : usr=89.91%, sys=8.82%, ctx=24, majf=0, minf=0 00:23:56.818 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:56.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.818 issued rwts: total=53536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.818 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:56.818 00:23:56.818 Run status group 0 (all jobs): 00:23:56.818 READ: bw=41.8MiB/s (43.8MB/s), 20.9MiB/s-20.9MiB/s (21.9MB/s-21.9MB/s), io=418MiB (439MB), run=10001-10001msec 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # 
set +x 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.818 00:23:56.818 real 0m10.994s 00:23:56.818 user 0m18.680s 00:23:56.818 sys 0m1.963s 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:56.818 ************************************ 00:23:56.818 14:39:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:56.818 END TEST fio_dif_1_multi_subsystems 00:23:56.818 ************************************ 00:23:56.818 14:39:48 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:56.818 14:39:48 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:56.818 14:39:48 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:56.818 14:39:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:56.818 ************************************ 00:23:56.818 START TEST fio_dif_rand_params 00:23:56.818 ************************************ 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.818 bdev_null0 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.818 [2024-12-16 14:39:48.545653] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.818 { 00:23:56.818 "params": { 00:23:56.818 "name": "Nvme$subsystem", 00:23:56.818 "trtype": "$TEST_TRANSPORT", 00:23:56.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.818 "adrfam": "ipv4", 00:23:56.818 "trsvcid": "$NVMF_PORT", 00:23:56.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.818 "hdgst": ${hdgst:-false}, 00:23:56.818 "ddgst": ${ddgst:-false} 00:23:56.818 }, 00:23:56.818 "method": "bdev_nvme_attach_controller" 00:23:56.818 } 00:23:56.818 EOF 00:23:56.818 )") 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
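At this point fio_dif_rand_params has created a DIF-protected null bdev and exposed it over NVMe/TCP via rpc_cmd (the bdev_null_create / nvmf_create_subsystem / nvmf_subsystem_add_ns / nvmf_subsystem_add_listener calls above) and is about to launch fio with the SPDK bdev plugin preloaded. A hedged standalone replay of the same sequence is sketched below: rpc.py stands in for the harness's rpc_cmd wrapper and assumes an nvmf_tgt is already running with a TCP transport created; bs, iodepth, numjobs and runtime are the values set at the top of this test, while the remaining job-file keys (filename=Nvme0n1, thread=1, direct, time_based) are assumptions about gen_fio_conf's output rather than values printed in this log.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path within the repo checkout used above

    # Null bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 3; export it as a namespace of cnode0.
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

    # Job file matching the parameters set for this test (bs=128k, numjobs=3, iodepth=3, runtime=5).
    cat > /tmp/rand_params.fio <<'EOF'
    [global]
    thread=1
    direct=1
    time_based=1
    runtime=5
    rw=randread
    bs=128k
    numjobs=3
    iodepth=3

    [filename0]
    filename=Nvme0n1
    EOF

    # Run fio with the SPDK fio bdev plugin preloaded; the JSON config comes from the
    # generator sketched earlier, passed through a file descriptor as in the trace.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf <(gen_target_json_sketch 0) /tmp/rand_params.fio

(Note the heredoc terminator and job-file lines would need to start at column 0 when copied out; they are indented here only to set the sketch apart from the surrounding log.)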
00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:56.818 "params": { 00:23:56.818 "name": "Nvme0", 00:23:56.818 "trtype": "tcp", 00:23:56.818 "traddr": "10.0.0.3", 00:23:56.818 "adrfam": "ipv4", 00:23:56.818 "trsvcid": "4420", 00:23:56.818 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:56.818 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:56.818 "hdgst": false, 00:23:56.818 "ddgst": false 00:23:56.818 }, 00:23:56.818 "method": "bdev_nvme_attach_controller" 00:23:56.818 }' 00:23:56.818 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:56.819 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:56.819 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:56.819 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:56.819 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:56.819 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:56.819 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:56.819 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:56.819 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:56.819 14:39:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:56.819 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:56.819 ... 
00:23:56.819 fio-3.35 00:23:56.819 Starting 3 threads 00:24:02.089 00:24:02.089 filename0: (groupid=0, jobs=1): err= 0: pid=99549: Mon Dec 16 14:39:54 2024 00:24:02.089 read: IOPS=275, BW=34.5MiB/s (36.2MB/s)(173MiB/5003msec) 00:24:02.089 slat (nsec): min=6922, max=56691, avg=15408.22, stdev=4940.84 00:24:02.089 clat (usec): min=10337, max=12476, avg=10842.11, stdev=372.95 00:24:02.089 lat (usec): min=10349, max=12491, avg=10857.52, stdev=373.25 00:24:02.089 clat percentiles (usec): 00:24:02.089 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10552], 20.00th=[10552], 00:24:02.089 | 30.00th=[10683], 40.00th=[10683], 50.00th=[10683], 60.00th=[10814], 00:24:02.089 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11731], 00:24:02.089 | 99.00th=[12256], 99.50th=[12387], 99.90th=[12518], 99.95th=[12518], 00:24:02.089 | 99.99th=[12518] 00:24:02.089 bw ( KiB/s): min=34560, max=36096, per=33.37%, avg=35328.00, stdev=384.00, samples=9 00:24:02.089 iops : min= 270, max= 282, avg=276.00, stdev= 3.00, samples=9 00:24:02.089 lat (msec) : 20=100.00% 00:24:02.089 cpu : usr=91.18%, sys=8.30%, ctx=8, majf=0, minf=0 00:24:02.089 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:02.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.089 issued rwts: total=1380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.089 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:02.089 filename0: (groupid=0, jobs=1): err= 0: pid=99550: Mon Dec 16 14:39:54 2024 00:24:02.089 read: IOPS=276, BW=34.5MiB/s (36.2MB/s)(173MiB/5009msec) 00:24:02.089 slat (nsec): min=6668, max=49921, avg=14252.25, stdev=5252.57 00:24:02.089 clat (usec): min=5843, max=12480, avg=10832.96, stdev=438.04 00:24:02.089 lat (usec): min=5852, max=12492, avg=10847.21, stdev=438.36 00:24:02.089 clat percentiles (usec): 00:24:02.089 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10552], 20.00th=[10552], 00:24:02.089 | 30.00th=[10683], 40.00th=[10683], 50.00th=[10683], 60.00th=[10814], 00:24:02.089 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11731], 00:24:02.089 | 99.00th=[12256], 99.50th=[12387], 99.90th=[12518], 99.95th=[12518], 00:24:02.089 | 99.99th=[12518] 00:24:02.089 bw ( KiB/s): min=34560, max=36096, per=33.37%, avg=35328.00, stdev=512.00, samples=10 00:24:02.089 iops : min= 270, max= 282, avg=276.00, stdev= 4.00, samples=10 00:24:02.089 lat (msec) : 10=0.22%, 20=99.78% 00:24:02.089 cpu : usr=91.39%, sys=8.07%, ctx=7, majf=0, minf=3 00:24:02.089 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:02.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.089 issued rwts: total=1383,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.089 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:02.089 filename0: (groupid=0, jobs=1): err= 0: pid=99551: Mon Dec 16 14:39:54 2024 00:24:02.089 read: IOPS=275, BW=34.5MiB/s (36.2MB/s)(173MiB/5003msec) 00:24:02.089 slat (nsec): min=6573, max=57769, avg=15552.55, stdev=5166.58 00:24:02.089 clat (usec): min=10395, max=12468, avg=10841.12, stdev=371.52 00:24:02.089 lat (usec): min=10408, max=12482, avg=10856.68, stdev=371.82 00:24:02.089 clat percentiles (usec): 00:24:02.089 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10552], 20.00th=[10552], 00:24:02.089 | 30.00th=[10683], 40.00th=[10683], 
50.00th=[10683], 60.00th=[10814], 00:24:02.089 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11600], 00:24:02.089 | 99.00th=[12256], 99.50th=[12387], 99.90th=[12387], 99.95th=[12518], 00:24:02.089 | 99.99th=[12518] 00:24:02.089 bw ( KiB/s): min=34560, max=36096, per=33.37%, avg=35328.00, stdev=384.00, samples=9 00:24:02.089 iops : min= 270, max= 282, avg=276.00, stdev= 3.00, samples=9 00:24:02.089 lat (msec) : 20=100.00% 00:24:02.089 cpu : usr=91.28%, sys=8.20%, ctx=17, majf=0, minf=0 00:24:02.089 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:02.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.090 issued rwts: total=1380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.090 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:02.090 00:24:02.090 Run status group 0 (all jobs): 00:24:02.090 READ: bw=103MiB/s (108MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=518MiB (543MB), run=5003-5009msec 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:24:02.350 14:39:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.350 bdev_null0 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.350 [2024-12-16 14:39:54.409622] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.350 bdev_null1 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.350 bdev_null2 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:02.350 14:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:02.350 { 00:24:02.350 "params": { 00:24:02.350 "name": "Nvme$subsystem", 00:24:02.350 "trtype": "$TEST_TRANSPORT", 00:24:02.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.351 "adrfam": "ipv4", 00:24:02.351 "trsvcid": "$NVMF_PORT", 00:24:02.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.351 "hdgst": ${hdgst:-false}, 00:24:02.351 "ddgst": ${ddgst:-false} 00:24:02.351 }, 00:24:02.351 "method": "bdev_nvme_attach_controller" 00:24:02.351 } 00:24:02.351 EOF 00:24:02.351 )") 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:02.351 { 00:24:02.351 "params": { 00:24:02.351 "name": "Nvme$subsystem", 00:24:02.351 "trtype": "$TEST_TRANSPORT", 00:24:02.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.351 "adrfam": "ipv4", 00:24:02.351 "trsvcid": "$NVMF_PORT", 00:24:02.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.351 "hdgst": ${hdgst:-false}, 00:24:02.351 "ddgst": ${ddgst:-false} 00:24:02.351 }, 00:24:02.351 "method": "bdev_nvme_attach_controller" 00:24:02.351 } 00:24:02.351 EOF 00:24:02.351 )") 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:02.351 14:39:54 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:02.351 { 00:24:02.351 "params": { 00:24:02.351 "name": "Nvme$subsystem", 00:24:02.351 "trtype": "$TEST_TRANSPORT", 00:24:02.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.351 "adrfam": "ipv4", 00:24:02.351 "trsvcid": "$NVMF_PORT", 00:24:02.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.351 "hdgst": ${hdgst:-false}, 00:24:02.351 "ddgst": ${ddgst:-false} 00:24:02.351 }, 00:24:02.351 "method": "bdev_nvme_attach_controller" 00:24:02.351 } 00:24:02.351 EOF 00:24:02.351 )") 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:02.351 "params": { 00:24:02.351 "name": "Nvme0", 00:24:02.351 "trtype": "tcp", 00:24:02.351 "traddr": "10.0.0.3", 00:24:02.351 "adrfam": "ipv4", 00:24:02.351 "trsvcid": "4420", 00:24:02.351 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:02.351 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:02.351 "hdgst": false, 00:24:02.351 "ddgst": false 00:24:02.351 }, 00:24:02.351 "method": "bdev_nvme_attach_controller" 00:24:02.351 },{ 00:24:02.351 "params": { 00:24:02.351 "name": "Nvme1", 00:24:02.351 "trtype": "tcp", 00:24:02.351 "traddr": "10.0.0.3", 00:24:02.351 "adrfam": "ipv4", 00:24:02.351 "trsvcid": "4420", 00:24:02.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:02.351 "hdgst": false, 00:24:02.351 "ddgst": false 00:24:02.351 }, 00:24:02.351 "method": "bdev_nvme_attach_controller" 00:24:02.351 },{ 00:24:02.351 "params": { 00:24:02.351 "name": "Nvme2", 00:24:02.351 "trtype": "tcp", 00:24:02.351 "traddr": "10.0.0.3", 00:24:02.351 "adrfam": "ipv4", 00:24:02.351 "trsvcid": "4420", 00:24:02.351 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:02.351 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:02.351 "hdgst": false, 00:24:02.351 "ddgst": false 00:24:02.351 }, 00:24:02.351 "method": "bdev_nvme_attach_controller" 00:24:02.351 }' 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:02.351 14:39:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:02.610 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:02.610 ... 00:24:02.610 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:02.610 ... 00:24:02.610 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:02.610 ... 00:24:02.610 fio-3.35 00:24:02.610 Starting 24 threads 00:24:14.815 00:24:14.815 filename0: (groupid=0, jobs=1): err= 0: pid=99640: Mon Dec 16 14:40:05 2024 00:24:14.815 read: IOPS=209, BW=839KiB/s (859kB/s)(8416KiB/10030msec) 00:24:14.815 slat (usec): min=6, max=8029, avg=38.86, stdev=427.28 00:24:14.815 clat (msec): min=16, max=155, avg=76.05, stdev=22.49 00:24:14.815 lat (msec): min=16, max=155, avg=76.09, stdev=22.50 00:24:14.815 clat percentiles (msec): 00:24:14.815 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 58], 00:24:14.815 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 83], 00:24:14.815 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 112], 00:24:14.815 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 146], 99.95th=[ 146], 00:24:14.815 | 99.99th=[ 157] 00:24:14.815 bw ( KiB/s): min= 640, max= 1261, per=3.91%, avg=835.05, stdev=145.98, samples=20 00:24:14.815 iops : min= 160, max= 315, avg=208.75, stdev=36.46, samples=20 00:24:14.815 lat (msec) : 20=0.10%, 50=16.68%, 100=68.73%, 250=14.50% 00:24:14.815 cpu : usr=31.33%, sys=1.79%, ctx=842, majf=0, minf=9 00:24:14.815 IO depths : 1=0.1%, 2=1.7%, 4=6.7%, 8=75.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:14.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.815 complete : 0=0.0%, 4=89.4%, 8=9.2%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.815 issued rwts: total=2104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.815 filename0: (groupid=0, jobs=1): err= 0: pid=99641: Mon Dec 16 14:40:05 2024 00:24:14.815 read: IOPS=222, BW=889KiB/s (910kB/s)(8904KiB/10014msec) 00:24:14.815 slat (usec): min=3, max=6029, avg=23.11, stdev=197.12 00:24:14.815 clat (msec): min=18, max=137, avg=71.85, stdev=22.03 00:24:14.815 lat (msec): min=18, max=137, avg=71.87, stdev=22.03 00:24:14.815 clat percentiles (msec): 00:24:14.815 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 51], 00:24:14.815 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:24:14.815 | 70.00th=[ 81], 80.00th=[ 89], 90.00th=[ 106], 95.00th=[ 113], 00:24:14.815 | 99.00th=[ 123], 99.50th=[ 124], 99.90th=[ 132], 99.95th=[ 138], 00:24:14.815 | 99.99th=[ 138] 00:24:14.815 bw ( KiB/s): min= 704, max= 1152, per=4.14%, avg=884.42, stdev=139.72, samples=19 00:24:14.815 iops : min= 176, max= 288, avg=221.11, stdev=34.93, samples=19 00:24:14.815 lat (msec) : 20=0.31%, 50=19.23%, 100=67.74%, 250=12.71% 00:24:14.815 cpu : usr=42.38%, sys=2.52%, ctx=1545, majf=0, minf=9 00:24:14.815 IO depths : 1=0.1%, 2=1.6%, 4=6.2%, 8=77.3%, 16=14.9%, 32=0.0%, >=64=0.0% 00:24:14.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.815 complete : 0=0.0%, 4=88.5%, 8=10.2%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.815 issued rwts: total=2226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:24:14.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.815 filename0: (groupid=0, jobs=1): err= 0: pid=99642: Mon Dec 16 14:40:05 2024 00:24:14.815 read: IOPS=218, BW=874KiB/s (895kB/s)(8740KiB/10005msec) 00:24:14.815 slat (usec): min=6, max=8027, avg=20.97, stdev=242.45 00:24:14.815 clat (msec): min=4, max=144, avg=73.12, stdev=23.96 00:24:14.815 lat (msec): min=4, max=144, avg=73.14, stdev=23.96 00:24:14.815 clat percentiles (msec): 00:24:14.815 | 1.00th=[ 7], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 50], 00:24:14.815 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:24:14.815 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 118], 00:24:14.815 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:24:14.815 | 99.99th=[ 144] 00:24:14.815 bw ( KiB/s): min= 656, max= 1064, per=3.96%, avg=847.16, stdev=124.19, samples=19 00:24:14.815 iops : min= 164, max= 266, avg=211.79, stdev=31.05, samples=19 00:24:14.815 lat (msec) : 10=1.88%, 20=0.59%, 50=17.94%, 100=67.64%, 250=11.95% 00:24:14.815 cpu : usr=32.15%, sys=1.87%, ctx=957, majf=0, minf=9 00:24:14.815 IO depths : 1=0.1%, 2=2.0%, 4=8.0%, 8=75.2%, 16=14.7%, 32=0.0%, >=64=0.0% 00:24:14.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.815 complete : 0=0.0%, 4=89.1%, 8=9.2%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.815 issued rwts: total=2185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.815 filename0: (groupid=0, jobs=1): err= 0: pid=99643: Mon Dec 16 14:40:05 2024 00:24:14.815 read: IOPS=211, BW=845KiB/s (865kB/s)(8480KiB/10035msec) 00:24:14.815 slat (usec): min=4, max=4027, avg=19.46, stdev=132.22 00:24:14.815 clat (msec): min=13, max=155, avg=75.58, stdev=25.40 00:24:14.815 lat (msec): min=13, max=155, avg=75.60, stdev=25.40 00:24:14.815 clat percentiles (msec): 00:24:14.815 | 1.00th=[ 16], 5.00th=[ 30], 10.00th=[ 45], 20.00th=[ 55], 00:24:14.815 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 80], 00:24:14.815 | 70.00th=[ 86], 80.00th=[ 97], 90.00th=[ 112], 95.00th=[ 120], 00:24:14.815 | 99.00th=[ 125], 99.50th=[ 131], 99.90th=[ 146], 99.95th=[ 148], 00:24:14.815 | 99.99th=[ 157] 00:24:14.815 bw ( KiB/s): min= 592, max= 1536, per=3.94%, avg=842.80, stdev=220.30, samples=20 00:24:14.815 iops : min= 148, max= 384, avg=210.70, stdev=55.08, samples=20 00:24:14.815 lat (msec) : 20=2.26%, 50=12.83%, 100=66.46%, 250=18.44% 00:24:14.815 cpu : usr=47.84%, sys=2.53%, ctx=1543, majf=0, minf=0 00:24:14.815 IO depths : 1=0.1%, 2=2.7%, 4=11.0%, 8=71.4%, 16=14.8%, 32=0.0%, >=64=0.0% 00:24:14.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.815 complete : 0=0.0%, 4=90.5%, 8=7.1%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.815 issued rwts: total=2120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.815 filename0: (groupid=0, jobs=1): err= 0: pid=99644: Mon Dec 16 14:40:05 2024 00:24:14.815 read: IOPS=223, BW=896KiB/s (917kB/s)(8988KiB/10036msec) 00:24:14.815 slat (usec): min=4, max=4025, avg=20.83, stdev=151.21 00:24:14.816 clat (msec): min=14, max=144, avg=71.29, stdev=22.42 00:24:14.816 lat (msec): min=14, max=144, avg=71.31, stdev=22.42 00:24:14.816 clat percentiles (msec): 00:24:14.816 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 51], 00:24:14.816 | 30.00th=[ 58], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 75], 00:24:14.816 | 70.00th=[ 81], 80.00th=[ 87], 
90.00th=[ 107], 95.00th=[ 112], 00:24:14.816 | 99.00th=[ 123], 99.50th=[ 130], 99.90th=[ 132], 99.95th=[ 132], 00:24:14.816 | 99.99th=[ 146] 00:24:14.816 bw ( KiB/s): min= 616, max= 1280, per=4.18%, avg=894.70, stdev=168.01, samples=20 00:24:14.816 iops : min= 154, max= 320, avg=223.65, stdev=42.01, samples=20 00:24:14.816 lat (msec) : 20=0.22%, 50=19.80%, 100=67.02%, 250=12.95% 00:24:14.816 cpu : usr=38.67%, sys=2.12%, ctx=1344, majf=0, minf=9 00:24:14.816 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:24:14.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.816 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.816 issued rwts: total=2247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.816 filename0: (groupid=0, jobs=1): err= 0: pid=99645: Mon Dec 16 14:40:05 2024 00:24:14.816 read: IOPS=234, BW=937KiB/s (960kB/s)(9372KiB/10001msec) 00:24:14.816 slat (usec): min=6, max=8026, avg=20.14, stdev=234.08 00:24:14.816 clat (usec): min=1062, max=131943, avg=68178.26, stdev=25279.49 00:24:14.816 lat (usec): min=1070, max=131952, avg=68198.41, stdev=25284.71 00:24:14.816 clat percentiles (usec): 00:24:14.816 | 1.00th=[ 1827], 5.00th=[ 11600], 10.00th=[ 42730], 20.00th=[ 47973], 00:24:14.816 | 30.00th=[ 57410], 40.00th=[ 63177], 50.00th=[ 71828], 60.00th=[ 72877], 00:24:14.816 | 70.00th=[ 81265], 80.00th=[ 84411], 90.00th=[ 99091], 95.00th=[108528], 00:24:14.816 | 99.00th=[120062], 99.50th=[122160], 99.90th=[131597], 99.95th=[131597], 00:24:14.816 | 99.99th=[131597] 00:24:14.816 bw ( KiB/s): min= 664, max= 1176, per=4.15%, avg=887.58, stdev=128.46, samples=19 00:24:14.816 iops : min= 166, max= 294, avg=221.89, stdev=32.11, samples=19 00:24:14.816 lat (msec) : 2=1.49%, 4=1.49%, 10=1.79%, 20=0.43%, 50=21.04% 00:24:14.816 lat (msec) : 100=64.02%, 250=9.73% 00:24:14.816 cpu : usr=32.46%, sys=1.82%, ctx=906, majf=0, minf=9 00:24:14.816 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=80.2%, 16=15.3%, 32=0.0%, >=64=0.0% 00:24:14.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.816 complete : 0=0.0%, 4=87.8%, 8=11.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.816 issued rwts: total=2343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.816 filename0: (groupid=0, jobs=1): err= 0: pid=99646: Mon Dec 16 14:40:05 2024 00:24:14.816 read: IOPS=233, BW=935KiB/s (958kB/s)(9360KiB/10010msec) 00:24:14.816 slat (usec): min=3, max=8026, avg=22.98, stdev=234.15 00:24:14.816 clat (msec): min=11, max=131, avg=68.33, stdev=22.37 00:24:14.816 lat (msec): min=11, max=131, avg=68.35, stdev=22.37 00:24:14.816 clat percentiles (msec): 00:24:14.816 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 48], 00:24:14.816 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:24:14.816 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 101], 95.00th=[ 109], 00:24:14.816 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 132], 99.95th=[ 132], 00:24:14.816 | 99.99th=[ 132] 00:24:14.816 bw ( KiB/s): min= 664, max= 1304, per=4.36%, avg=931.80, stdev=155.92, samples=20 00:24:14.816 iops : min= 166, max= 326, avg=232.95, stdev=38.98, samples=20 00:24:14.816 lat (msec) : 20=0.81%, 50=27.18%, 100=62.01%, 250=10.00% 00:24:14.816 cpu : usr=31.46%, sys=1.75%, ctx=874, majf=0, minf=9 00:24:14.816 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:14.816 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.816 complete : 0=0.0%, 4=86.7%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.816 issued rwts: total=2340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.816 filename0: (groupid=0, jobs=1): err= 0: pid=99647: Mon Dec 16 14:40:05 2024 00:24:14.816 read: IOPS=231, BW=926KiB/s (948kB/s)(9264KiB/10009msec) 00:24:14.816 slat (usec): min=5, max=4026, avg=24.39, stdev=174.87 00:24:14.816 clat (msec): min=17, max=126, avg=69.04, stdev=21.77 00:24:14.816 lat (msec): min=17, max=126, avg=69.06, stdev=21.77 00:24:14.816 clat percentiles (msec): 00:24:14.816 | 1.00th=[ 23], 5.00th=[ 34], 10.00th=[ 46], 20.00th=[ 50], 00:24:14.816 | 30.00th=[ 54], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 73], 00:24:14.816 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 103], 95.00th=[ 111], 00:24:14.816 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 127], 99.95th=[ 127], 00:24:14.816 | 99.99th=[ 127] 00:24:14.816 bw ( KiB/s): min= 664, max= 1256, per=4.32%, avg=922.53, stdev=137.20, samples=19 00:24:14.816 iops : min= 166, max= 314, avg=230.63, stdev=34.30, samples=19 00:24:14.816 lat (msec) : 20=0.60%, 50=21.89%, 100=66.45%, 250=11.05% 00:24:14.816 cpu : usr=43.90%, sys=2.24%, ctx=1618, majf=0, minf=9 00:24:14.816 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.9%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:14.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.816 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.816 issued rwts: total=2316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.816 filename1: (groupid=0, jobs=1): err= 0: pid=99648: Mon Dec 16 14:40:05 2024 00:24:14.816 read: IOPS=225, BW=902KiB/s (923kB/s)(9072KiB/10062msec) 00:24:14.816 slat (usec): min=3, max=4026, avg=19.37, stdev=137.93 00:24:14.816 clat (msec): min=2, max=155, avg=70.83, stdev=26.69 00:24:14.816 lat (msec): min=2, max=155, avg=70.85, stdev=26.69 00:24:14.816 clat percentiles (msec): 00:24:14.816 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 35], 20.00th=[ 50], 00:24:14.816 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 78], 00:24:14.816 | 70.00th=[ 82], 80.00th=[ 92], 90.00th=[ 108], 95.00th=[ 113], 00:24:14.816 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 144], 99.95th=[ 146], 00:24:14.816 | 99.99th=[ 157] 00:24:14.816 bw ( KiB/s): min= 584, max= 2048, per=4.21%, avg=900.70, stdev=305.40, samples=20 00:24:14.816 iops : min= 146, max= 512, avg=225.15, stdev=76.34, samples=20 00:24:14.816 lat (msec) : 4=0.71%, 10=2.29%, 20=1.94%, 50=16.89%, 100=64.15% 00:24:14.816 lat (msec) : 250=14.02% 00:24:14.816 cpu : usr=42.73%, sys=2.61%, ctx=1401, majf=0, minf=9 00:24:14.816 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=78.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:14.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.816 complete : 0=0.0%, 4=88.9%, 8=10.1%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.816 issued rwts: total=2268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.816 filename1: (groupid=0, jobs=1): err= 0: pid=99649: Mon Dec 16 14:40:05 2024 00:24:14.816 read: IOPS=220, BW=884KiB/s (905kB/s)(8876KiB/10042msec) 00:24:14.816 slat (usec): min=4, max=8030, avg=21.43, stdev=208.41 00:24:14.816 clat (msec): min=9, max=154, avg=72.23, stdev=24.35 00:24:14.816 lat (msec): min=9, max=155, 
avg=72.25, stdev=24.35 00:24:14.816 clat percentiles (msec): 00:24:14.816 | 1.00th=[ 12], 5.00th=[ 31], 10.00th=[ 45], 20.00th=[ 51], 00:24:14.816 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:24:14.816 | 70.00th=[ 81], 80.00th=[ 91], 90.00th=[ 108], 95.00th=[ 117], 00:24:14.816 | 99.00th=[ 124], 99.50th=[ 128], 99.90th=[ 144], 99.95th=[ 146], 00:24:14.816 | 99.99th=[ 155] 00:24:14.816 bw ( KiB/s): min= 632, max= 1507, per=4.13%, avg=883.75, stdev=203.81, samples=20 00:24:14.816 iops : min= 158, max= 376, avg=220.90, stdev=50.83, samples=20 00:24:14.816 lat (msec) : 10=0.09%, 20=2.21%, 50=16.99%, 100=66.83%, 250=13.88% 00:24:14.816 cpu : usr=39.94%, sys=2.53%, ctx=1237, majf=0, minf=9 00:24:14.816 IO depths : 1=0.1%, 2=0.9%, 4=3.2%, 8=79.7%, 16=16.1%, 32=0.0%, >=64=0.0% 00:24:14.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.816 complete : 0=0.0%, 4=88.4%, 8=10.9%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.816 issued rwts: total=2219,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.816 filename1: (groupid=0, jobs=1): err= 0: pid=99650: Mon Dec 16 14:40:05 2024 00:24:14.816 read: IOPS=225, BW=903KiB/s (925kB/s)(9040KiB/10007msec) 00:24:14.816 slat (usec): min=6, max=8026, avg=35.89, stdev=385.61 00:24:14.816 clat (msec): min=5, max=143, avg=70.65, stdev=22.42 00:24:14.816 lat (msec): min=5, max=143, avg=70.68, stdev=22.43 00:24:14.816 clat percentiles (msec): 00:24:14.816 | 1.00th=[ 12], 5.00th=[ 38], 10.00th=[ 48], 20.00th=[ 48], 00:24:14.816 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:24:14.816 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 109], 00:24:14.816 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 132], 99.95th=[ 132], 00:24:14.816 | 99.99th=[ 144] 00:24:14.816 bw ( KiB/s): min= 632, max= 1176, per=4.17%, avg=890.95, stdev=145.14, samples=19 00:24:14.816 iops : min= 158, max= 294, avg=222.74, stdev=36.28, samples=19 00:24:14.816 lat (msec) : 10=0.44%, 20=0.84%, 50=23.54%, 100=63.45%, 250=11.73% 00:24:14.816 cpu : usr=31.35%, sys=1.86%, ctx=896, majf=0, minf=9 00:24:14.816 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.7%, 16=15.4%, 32=0.0%, >=64=0.0% 00:24:14.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.816 complete : 0=0.0%, 4=87.6%, 8=11.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.816 issued rwts: total=2260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.816 filename1: (groupid=0, jobs=1): err= 0: pid=99651: Mon Dec 16 14:40:05 2024 00:24:14.816 read: IOPS=233, BW=933KiB/s (955kB/s)(9336KiB/10006msec) 00:24:14.816 slat (usec): min=3, max=8036, avg=27.71, stdev=331.67 00:24:14.816 clat (msec): min=5, max=143, avg=68.49, stdev=23.85 00:24:14.816 lat (msec): min=5, max=143, avg=68.52, stdev=23.85 00:24:14.816 clat percentiles (msec): 00:24:14.816 | 1.00th=[ 9], 5.00th=[ 32], 10.00th=[ 43], 20.00th=[ 48], 00:24:14.816 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:24:14.816 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 111], 00:24:14.816 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 132], 00:24:14.816 | 99.99th=[ 144] 00:24:14.816 bw ( KiB/s): min= 632, max= 1176, per=4.27%, avg=912.84, stdev=145.74, samples=19 00:24:14.816 iops : min= 158, max= 294, avg=228.21, stdev=36.43, samples=19 00:24:14.816 lat (msec) : 10=1.63%, 20=0.56%, 50=26.52%, 100=59.81%, 250=11.48% 00:24:14.816 
cpu : usr=32.31%, sys=1.85%, ctx=879, majf=0, minf=9 00:24:14.816 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:14.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.816 complete : 0=0.0%, 4=87.1%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.816 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.817 filename1: (groupid=0, jobs=1): err= 0: pid=99652: Mon Dec 16 14:40:05 2024 00:24:14.817 read: IOPS=219, BW=880KiB/s (901kB/s)(8832KiB/10037msec) 00:24:14.817 slat (usec): min=8, max=8027, avg=19.09, stdev=170.57 00:24:14.817 clat (msec): min=23, max=155, avg=72.57, stdev=22.76 00:24:14.817 lat (msec): min=23, max=155, avg=72.59, stdev=22.76 00:24:14.817 clat percentiles (msec): 00:24:14.817 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 48], 20.00th=[ 50], 00:24:14.817 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:24:14.817 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 108], 95.00th=[ 111], 00:24:14.817 | 99.00th=[ 122], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:24:14.817 | 99.99th=[ 157] 00:24:14.817 bw ( KiB/s): min= 608, max= 1266, per=4.10%, avg=876.90, stdev=169.50, samples=20 00:24:14.817 iops : min= 152, max= 316, avg=219.20, stdev=42.31, samples=20 00:24:14.817 lat (msec) : 50=21.38%, 100=65.94%, 250=12.68% 00:24:14.817 cpu : usr=31.31%, sys=1.82%, ctx=848, majf=0, minf=9 00:24:14.817 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=80.9%, 16=16.3%, 32=0.0%, >=64=0.0% 00:24:14.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.817 complete : 0=0.0%, 4=88.1%, 8=11.5%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.817 issued rwts: total=2208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.817 filename1: (groupid=0, jobs=1): err= 0: pid=99653: Mon Dec 16 14:40:05 2024 00:24:14.817 read: IOPS=227, BW=909KiB/s (930kB/s)(9124KiB/10042msec) 00:24:14.817 slat (nsec): min=5788, max=77578, avg=14503.56, stdev=5139.71 00:24:14.817 clat (msec): min=7, max=150, avg=70.27, stdev=23.71 00:24:14.817 lat (msec): min=7, max=150, avg=70.28, stdev=23.71 00:24:14.817 clat percentiles (msec): 00:24:14.817 | 1.00th=[ 16], 5.00th=[ 29], 10.00th=[ 46], 20.00th=[ 49], 00:24:14.817 | 30.00th=[ 56], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 74], 00:24:14.817 | 70.00th=[ 80], 80.00th=[ 87], 90.00th=[ 106], 95.00th=[ 113], 00:24:14.817 | 99.00th=[ 123], 99.50th=[ 127], 99.90th=[ 144], 99.95th=[ 146], 00:24:14.817 | 99.99th=[ 150] 00:24:14.817 bw ( KiB/s): min= 632, max= 1648, per=4.25%, avg=908.40, stdev=213.82, samples=20 00:24:14.817 iops : min= 158, max= 412, avg=227.10, stdev=53.46, samples=20 00:24:14.817 lat (msec) : 10=0.61%, 20=2.10%, 50=19.51%, 100=65.41%, 250=12.36% 00:24:14.817 cpu : usr=42.79%, sys=2.30%, ctx=1386, majf=0, minf=9 00:24:14.817 IO depths : 1=0.1%, 2=1.1%, 4=4.0%, 8=79.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:14.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.817 complete : 0=0.0%, 4=88.2%, 8=10.9%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.817 issued rwts: total=2281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.817 filename1: (groupid=0, jobs=1): err= 0: pid=99654: Mon Dec 16 14:40:05 2024 00:24:14.817 read: IOPS=221, BW=886KiB/s (907kB/s)(8880KiB/10028msec) 00:24:14.817 slat (usec): min=8, max=4037, 
avg=20.49, stdev=136.58 00:24:14.817 clat (msec): min=23, max=147, avg=72.14, stdev=22.38 00:24:14.817 lat (msec): min=23, max=148, avg=72.16, stdev=22.38 00:24:14.817 clat percentiles (msec): 00:24:14.817 | 1.00th=[ 28], 5.00th=[ 38], 10.00th=[ 47], 20.00th=[ 51], 00:24:14.817 | 30.00th=[ 58], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:24:14.817 | 70.00th=[ 80], 80.00th=[ 88], 90.00th=[ 108], 95.00th=[ 114], 00:24:14.817 | 99.00th=[ 125], 99.50th=[ 128], 99.90th=[ 140], 99.95th=[ 148], 00:24:14.817 | 99.99th=[ 148] 00:24:14.817 bw ( KiB/s): min= 656, max= 1152, per=4.12%, avg=881.60, stdev=145.97, samples=20 00:24:14.817 iops : min= 164, max= 288, avg=220.40, stdev=36.49, samples=20 00:24:14.817 lat (msec) : 50=19.86%, 100=66.22%, 250=13.92% 00:24:14.817 cpu : usr=44.06%, sys=2.26%, ctx=1287, majf=0, minf=9 00:24:14.817 IO depths : 1=0.1%, 2=1.2%, 4=5.0%, 8=78.4%, 16=15.4%, 32=0.0%, >=64=0.0% 00:24:14.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.817 complete : 0=0.0%, 4=88.4%, 8=10.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.817 issued rwts: total=2220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.817 filename1: (groupid=0, jobs=1): err= 0: pid=99655: Mon Dec 16 14:40:05 2024 00:24:14.817 read: IOPS=233, BW=933KiB/s (956kB/s)(9372KiB/10040msec) 00:24:14.817 slat (usec): min=4, max=517, avg=14.22, stdev=11.42 00:24:14.817 clat (msec): min=7, max=134, avg=68.46, stdev=24.01 00:24:14.817 lat (msec): min=7, max=134, avg=68.47, stdev=24.01 00:24:14.817 clat percentiles (msec): 00:24:14.817 | 1.00th=[ 13], 5.00th=[ 26], 10.00th=[ 40], 20.00th=[ 48], 00:24:14.817 | 30.00th=[ 55], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 73], 00:24:14.817 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 104], 95.00th=[ 112], 00:24:14.817 | 99.00th=[ 124], 99.50th=[ 127], 99.90th=[ 129], 99.95th=[ 132], 00:24:14.817 | 99.99th=[ 134] 00:24:14.817 bw ( KiB/s): min= 672, max= 1640, per=4.35%, avg=930.80, stdev=221.80, samples=20 00:24:14.817 iops : min= 168, max= 410, avg=232.70, stdev=55.45, samples=20 00:24:14.817 lat (msec) : 10=0.60%, 20=1.54%, 50=22.45%, 100=64.15%, 250=11.27% 00:24:14.817 cpu : usr=42.38%, sys=2.61%, ctx=1323, majf=0, minf=9 00:24:14.817 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:24:14.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.817 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.817 issued rwts: total=2343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.817 filename2: (groupid=0, jobs=1): err= 0: pid=99656: Mon Dec 16 14:40:05 2024 00:24:14.817 read: IOPS=193, BW=774KiB/s (793kB/s)(7768KiB/10036msec) 00:24:14.817 slat (usec): min=8, max=8032, avg=31.49, stdev=308.69 00:24:14.817 clat (msec): min=19, max=166, avg=82.36, stdev=23.93 00:24:14.817 lat (msec): min=19, max=166, avg=82.39, stdev=23.93 00:24:14.817 clat percentiles (msec): 00:24:14.817 | 1.00th=[ 26], 5.00th=[ 45], 10.00th=[ 49], 20.00th=[ 70], 00:24:14.817 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 84], 00:24:14.817 | 70.00th=[ 94], 80.00th=[ 104], 90.00th=[ 114], 95.00th=[ 121], 00:24:14.817 | 99.00th=[ 148], 99.50th=[ 148], 99.90th=[ 167], 99.95th=[ 167], 00:24:14.817 | 99.99th=[ 167] 00:24:14.817 bw ( KiB/s): min= 512, max= 1149, per=3.60%, avg=770.25, stdev=155.54, samples=20 00:24:14.817 iops : min= 128, max= 287, avg=192.55, 
stdev=38.85, samples=20 00:24:14.817 lat (msec) : 20=0.72%, 50=9.58%, 100=67.82%, 250=21.88% 00:24:14.817 cpu : usr=40.83%, sys=2.60%, ctx=1520, majf=0, minf=9 00:24:14.817 IO depths : 1=0.2%, 2=4.3%, 4=16.6%, 8=65.0%, 16=14.0%, 32=0.0%, >=64=0.0% 00:24:14.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.817 complete : 0=0.0%, 4=92.2%, 8=4.1%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.817 issued rwts: total=1942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.817 filename2: (groupid=0, jobs=1): err= 0: pid=99657: Mon Dec 16 14:40:05 2024 00:24:14.817 read: IOPS=229, BW=917KiB/s (939kB/s)(9188KiB/10022msec) 00:24:14.817 slat (usec): min=3, max=8034, avg=19.60, stdev=167.36 00:24:14.817 clat (msec): min=22, max=132, avg=69.68, stdev=22.10 00:24:14.817 lat (msec): min=22, max=132, avg=69.70, stdev=22.10 00:24:14.817 clat percentiles (msec): 00:24:14.817 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 48], 00:24:14.817 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 72], 00:24:14.817 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 107], 95.00th=[ 110], 00:24:14.817 | 99.00th=[ 122], 99.50th=[ 125], 99.90th=[ 132], 99.95th=[ 132], 00:24:14.817 | 99.99th=[ 132] 00:24:14.817 bw ( KiB/s): min= 664, max= 1266, per=4.27%, avg=912.50, stdev=152.55, samples=20 00:24:14.817 iops : min= 166, max= 316, avg=228.10, stdev=38.08, samples=20 00:24:14.817 lat (msec) : 50=25.99%, 100=63.74%, 250=10.27% 00:24:14.817 cpu : usr=31.55%, sys=1.57%, ctx=838, majf=0, minf=9 00:24:14.817 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:14.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.817 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.817 issued rwts: total=2297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.817 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.817 filename2: (groupid=0, jobs=1): err= 0: pid=99658: Mon Dec 16 14:40:05 2024 00:24:14.817 read: IOPS=219, BW=876KiB/s (897kB/s)(8816KiB/10059msec) 00:24:14.817 slat (usec): min=5, max=8021, avg=25.85, stdev=282.78 00:24:14.817 clat (usec): min=1455, max=154502, avg=72814.61, stdev=30084.81 00:24:14.817 lat (usec): min=1463, max=154515, avg=72840.46, stdev=30087.68 00:24:14.817 clat percentiles (usec): 00:24:14.817 | 1.00th=[ 1598], 5.00th=[ 8029], 10.00th=[ 25035], 20.00th=[ 51643], 00:24:14.817 | 30.00th=[ 67634], 40.00th=[ 71828], 50.00th=[ 73925], 60.00th=[ 80217], 00:24:14.817 | 70.00th=[ 86508], 80.00th=[ 95945], 90.00th=[110625], 95.00th=[117965], 00:24:14.817 | 99.00th=[132645], 99.50th=[133694], 99.90th=[152044], 99.95th=[154141], 00:24:14.817 | 99.99th=[154141] 00:24:14.817 bw ( KiB/s): min= 592, max= 2544, per=4.10%, avg=875.20, stdev=415.91, samples=20 00:24:14.817 iops : min= 148, max= 636, avg=218.80, stdev=103.98, samples=20 00:24:14.817 lat (msec) : 2=1.45%, 4=2.90%, 10=1.72%, 20=2.31%, 50=10.16% 00:24:14.817 lat (msec) : 100=63.66%, 250=17.79% 00:24:14.817 cpu : usr=40.45%, sys=2.40%, ctx=1358, majf=0, minf=9 00:24:14.817 IO depths : 1=0.3%, 2=3.2%, 4=12.1%, 8=69.7%, 16=14.7%, 32=0.0%, >=64=0.0% 00:24:14.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.817 complete : 0=0.0%, 4=91.0%, 8=6.4%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.817 issued rwts: total=2204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.817 latency : target=0, window=0, percentile=100.00%, depth=16 
00:24:14.817 filename2: (groupid=0, jobs=1): err= 0: pid=99660: Mon Dec 16 14:40:05 2024 00:24:14.817 read: IOPS=217, BW=868KiB/s (889kB/s)(8688KiB/10008msec) 00:24:14.817 slat (usec): min=3, max=12023, avg=36.34, stdev=429.35 00:24:14.817 clat (msec): min=9, max=151, avg=73.55, stdev=22.75 00:24:14.817 lat (msec): min=9, max=151, avg=73.59, stdev=22.77 00:24:14.817 clat percentiles (msec): 00:24:14.817 | 1.00th=[ 22], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 52], 00:24:14.817 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:24:14.817 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 108], 95.00th=[ 116], 00:24:14.817 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 144], 00:24:14.817 | 99.99th=[ 153] 00:24:14.817 bw ( KiB/s): min= 632, max= 1112, per=4.04%, avg=864.80, stdev=134.87, samples=20 00:24:14.817 iops : min= 158, max= 278, avg=216.20, stdev=33.72, samples=20 00:24:14.817 lat (msec) : 10=0.14%, 20=0.78%, 50=18.37%, 100=67.54%, 250=13.17% 00:24:14.817 cpu : usr=31.43%, sys=1.84%, ctx=846, majf=0, minf=9 00:24:14.817 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=80.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:24:14.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.818 complete : 0=0.0%, 4=88.2%, 8=11.1%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.818 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.818 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.818 filename2: (groupid=0, jobs=1): err= 0: pid=99661: Mon Dec 16 14:40:05 2024 00:24:14.818 read: IOPS=236, BW=946KiB/s (969kB/s)(9464KiB/10006msec) 00:24:14.818 slat (usec): min=3, max=8030, avg=35.22, stdev=395.12 00:24:14.818 clat (msec): min=5, max=131, avg=67.50, stdev=23.03 00:24:14.818 lat (msec): min=5, max=131, avg=67.54, stdev=23.04 00:24:14.818 clat percentiles (msec): 00:24:14.818 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 46], 20.00th=[ 48], 00:24:14.818 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:24:14.818 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 109], 00:24:14.818 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 132], 99.95th=[ 132], 00:24:14.818 | 99.99th=[ 132] 00:24:14.818 bw ( KiB/s): min= 664, max= 1192, per=4.33%, avg=924.63, stdev=148.43, samples=19 00:24:14.818 iops : min= 166, max= 298, avg=231.16, stdev=37.11, samples=19 00:24:14.818 lat (msec) : 10=1.65%, 20=0.51%, 50=26.75%, 100=61.45%, 250=9.64% 00:24:14.818 cpu : usr=31.69%, sys=1.66%, ctx=898, majf=0, minf=9 00:24:14.818 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:14.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.818 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.818 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.818 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.818 filename2: (groupid=0, jobs=1): err= 0: pid=99662: Mon Dec 16 14:40:05 2024 00:24:14.818 read: IOPS=224, BW=898KiB/s (920kB/s)(9000KiB/10022msec) 00:24:14.818 slat (usec): min=3, max=12026, avg=35.46, stdev=430.62 00:24:14.818 clat (msec): min=21, max=155, avg=71.10, stdev=23.09 00:24:14.818 lat (msec): min=21, max=155, avg=71.14, stdev=23.11 00:24:14.818 clat percentiles (msec): 00:24:14.818 | 1.00th=[ 23], 5.00th=[ 35], 10.00th=[ 47], 20.00th=[ 48], 00:24:14.818 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:24:14.818 | 70.00th=[ 83], 80.00th=[ 86], 90.00th=[ 108], 95.00th=[ 111], 00:24:14.818 | 99.00th=[ 121], 
99.50th=[ 124], 99.90th=[ 132], 99.95th=[ 144], 00:24:14.818 | 99.99th=[ 157] 00:24:14.818 bw ( KiB/s): min= 640, max= 1328, per=4.18%, avg=893.70, stdev=170.21, samples=20 00:24:14.818 iops : min= 160, max= 332, avg=223.40, stdev=42.51, samples=20 00:24:14.818 lat (msec) : 50=24.49%, 100=62.71%, 250=12.80% 00:24:14.818 cpu : usr=31.19%, sys=2.03%, ctx=851, majf=0, minf=10 00:24:14.818 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:14.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.818 complete : 0=0.0%, 4=87.9%, 8=11.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.818 issued rwts: total=2250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.818 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.818 filename2: (groupid=0, jobs=1): err= 0: pid=99663: Mon Dec 16 14:40:05 2024 00:24:14.818 read: IOPS=227, BW=910KiB/s (932kB/s)(9120KiB/10020msec) 00:24:14.818 slat (usec): min=4, max=8027, avg=27.42, stdev=278.36 00:24:14.818 clat (msec): min=22, max=128, avg=70.15, stdev=21.65 00:24:14.818 lat (msec): min=22, max=128, avg=70.18, stdev=21.66 00:24:14.818 clat percentiles (msec): 00:24:14.818 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 49], 00:24:14.818 | 30.00th=[ 56], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:24:14.818 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 112], 00:24:14.818 | 99.00th=[ 122], 99.50th=[ 125], 99.90th=[ 129], 99.95th=[ 129], 00:24:14.818 | 99.99th=[ 129] 00:24:14.818 bw ( KiB/s): min= 664, max= 1176, per=4.24%, avg=905.60, stdev=142.96, samples=20 00:24:14.818 iops : min= 166, max= 294, avg=226.40, stdev=35.74, samples=20 00:24:14.818 lat (msec) : 50=22.19%, 100=66.84%, 250=10.96% 00:24:14.818 cpu : usr=40.11%, sys=2.37%, ctx=1391, majf=0, minf=9 00:24:14.818 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.8%, 16=15.2%, 32=0.0%, >=64=0.0% 00:24:14.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.818 complete : 0=0.0%, 4=87.8%, 8=11.3%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.818 issued rwts: total=2280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.818 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.818 filename2: (groupid=0, jobs=1): err= 0: pid=99664: Mon Dec 16 14:40:05 2024 00:24:14.818 read: IOPS=220, BW=882KiB/s (904kB/s)(8864KiB/10046msec) 00:24:14.818 slat (usec): min=5, max=8026, avg=24.48, stdev=294.74 00:24:14.818 clat (msec): min=11, max=155, avg=72.38, stdev=24.70 00:24:14.818 lat (msec): min=11, max=155, avg=72.40, stdev=24.70 00:24:14.818 clat percentiles (msec): 00:24:14.818 | 1.00th=[ 12], 5.00th=[ 25], 10.00th=[ 45], 20.00th=[ 49], 00:24:14.818 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 74], 00:24:14.818 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 117], 00:24:14.818 | 99.00th=[ 124], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 157], 00:24:14.818 | 99.99th=[ 157] 00:24:14.818 bw ( KiB/s): min= 608, max= 1568, per=4.12%, avg=880.00, stdev=220.41, samples=20 00:24:14.818 iops : min= 152, max= 392, avg=220.00, stdev=55.10, samples=20 00:24:14.818 lat (msec) : 20=1.53%, 50=20.53%, 100=63.76%, 250=14.17% 00:24:14.818 cpu : usr=31.56%, sys=1.83%, ctx=851, majf=0, minf=9 00:24:14.818 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.4%, 16=16.8%, 32=0.0%, >=64=0.0% 00:24:14.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.818 complete : 0=0.0%, 4=88.1%, 8=11.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.818 issued rwts: total=2216,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:24:14.818 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:14.818 00:24:14.818 Run status group 0 (all jobs): 00:24:14.818 READ: bw=20.9MiB/s (21.9MB/s), 774KiB/s-946KiB/s (793kB/s-969kB/s), io=210MiB (220MB), run=10001-10062msec 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 
00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.818 bdev_null0 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.818 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.819 [2024-12-16 14:40:05.630989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=1 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.819 bdev_null1 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:14.819 { 00:24:14.819 "params": { 00:24:14.819 "name": "Nvme$subsystem", 00:24:14.819 "trtype": "$TEST_TRANSPORT", 00:24:14.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.819 "adrfam": "ipv4", 00:24:14.819 "trsvcid": "$NVMF_PORT", 00:24:14.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.819 "hdgst": ${hdgst:-false}, 00:24:14.819 "ddgst": 
${ddgst:-false} 00:24:14.819 }, 00:24:14.819 "method": "bdev_nvme_attach_controller" 00:24:14.819 } 00:24:14.819 EOF 00:24:14.819 )") 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:14.819 { 00:24:14.819 "params": { 00:24:14.819 "name": "Nvme$subsystem", 00:24:14.819 "trtype": "$TEST_TRANSPORT", 00:24:14.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.819 "adrfam": "ipv4", 00:24:14.819 "trsvcid": "$NVMF_PORT", 00:24:14.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.819 "hdgst": ${hdgst:-false}, 00:24:14.819 "ddgst": ${ddgst:-false} 00:24:14.819 }, 00:24:14.819 "method": "bdev_nvme_attach_controller" 00:24:14.819 } 00:24:14.819 EOF 00:24:14.819 )") 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:14.819 "params": { 00:24:14.819 "name": "Nvme0", 00:24:14.819 "trtype": "tcp", 00:24:14.819 "traddr": "10.0.0.3", 00:24:14.819 "adrfam": "ipv4", 00:24:14.819 "trsvcid": "4420", 00:24:14.819 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:14.819 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:14.819 "hdgst": false, 00:24:14.819 "ddgst": false 00:24:14.819 }, 00:24:14.819 "method": "bdev_nvme_attach_controller" 00:24:14.819 },{ 00:24:14.819 "params": { 00:24:14.819 "name": "Nvme1", 00:24:14.819 "trtype": "tcp", 00:24:14.819 "traddr": "10.0.0.3", 00:24:14.819 "adrfam": "ipv4", 00:24:14.819 "trsvcid": "4420", 00:24:14.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:14.819 "hdgst": false, 00:24:14.819 "ddgst": false 00:24:14.819 }, 00:24:14.819 "method": "bdev_nvme_attach_controller" 00:24:14.819 }' 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:14.819 14:40:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:14.819 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:14.819 ... 00:24:14.819 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:14.819 ... 
00:24:14.819 fio-3.35 00:24:14.819 Starting 4 threads 00:24:20.133 00:24:20.133 filename0: (groupid=0, jobs=1): err= 0: pid=99804: Mon Dec 16 14:40:11 2024 00:24:20.133 read: IOPS=2179, BW=17.0MiB/s (17.9MB/s)(85.2MiB/5002msec) 00:24:20.133 slat (nsec): min=6696, max=45508, avg=11231.65, stdev=4381.26 00:24:20.133 clat (usec): min=840, max=6307, avg=3634.22, stdev=614.00 00:24:20.133 lat (usec): min=848, max=6319, avg=3645.46, stdev=614.20 00:24:20.133 clat percentiles (usec): 00:24:20.133 | 1.00th=[ 1860], 5.00th=[ 2999], 10.00th=[ 3097], 20.00th=[ 3163], 00:24:20.133 | 30.00th=[ 3326], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3687], 00:24:20.133 | 70.00th=[ 3720], 80.00th=[ 3916], 90.00th=[ 4555], 95.00th=[ 4883], 00:24:20.133 | 99.00th=[ 5211], 99.50th=[ 5407], 99.90th=[ 5735], 99.95th=[ 5800], 00:24:20.133 | 99.99th=[ 6063] 00:24:20.133 bw ( KiB/s): min=16896, max=18352, per=25.24%, avg=17437.00, stdev=405.99, samples=10 00:24:20.133 iops : min= 2112, max= 2294, avg=2179.60, stdev=50.75, samples=10 00:24:20.133 lat (usec) : 1000=0.02% 00:24:20.133 lat (msec) : 2=1.36%, 4=80.81%, 10=17.82% 00:24:20.133 cpu : usr=91.64%, sys=7.62%, ctx=3, majf=0, minf=0 00:24:20.133 IO depths : 1=0.1%, 2=11.2%, 4=61.4%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:20.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.133 complete : 0=0.0%, 4=95.6%, 8=4.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.133 issued rwts: total=10900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.133 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:20.133 filename0: (groupid=0, jobs=1): err= 0: pid=99805: Mon Dec 16 14:40:11 2024 00:24:20.133 read: IOPS=2153, BW=16.8MiB/s (17.6MB/s)(84.2MiB/5002msec) 00:24:20.133 slat (nsec): min=7029, max=61884, avg=14824.03, stdev=4732.06 00:24:20.133 clat (usec): min=1089, max=6519, avg=3667.13, stdev=599.82 00:24:20.133 lat (usec): min=1097, max=6531, avg=3681.95, stdev=599.68 00:24:20.133 clat percentiles (usec): 00:24:20.133 | 1.00th=[ 1926], 5.00th=[ 3032], 10.00th=[ 3097], 20.00th=[ 3163], 00:24:20.133 | 30.00th=[ 3392], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3687], 00:24:20.133 | 70.00th=[ 3752], 80.00th=[ 3982], 90.00th=[ 4621], 95.00th=[ 4883], 00:24:20.133 | 99.00th=[ 5276], 99.50th=[ 5407], 99.90th=[ 5669], 99.95th=[ 5800], 00:24:20.133 | 99.99th=[ 6325] 00:24:20.133 bw ( KiB/s): min=16512, max=18288, per=24.92%, avg=17217.78, stdev=558.25, samples=9 00:24:20.133 iops : min= 2064, max= 2286, avg=2152.22, stdev=69.78, samples=9 00:24:20.133 lat (msec) : 2=1.13%, 4=79.46%, 10=19.41% 00:24:20.133 cpu : usr=91.84%, sys=7.34%, ctx=47, majf=0, minf=10 00:24:20.133 IO depths : 1=0.1%, 2=12.0%, 4=61.0%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:20.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.133 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.133 issued rwts: total=10774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.133 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:20.133 filename1: (groupid=0, jobs=1): err= 0: pid=99806: Mon Dec 16 14:40:11 2024 00:24:20.133 read: IOPS=2123, BW=16.6MiB/s (17.4MB/s)(83.0MiB/5001msec) 00:24:20.133 slat (nsec): min=6579, max=58323, avg=14927.30, stdev=4794.83 00:24:20.133 clat (usec): min=1005, max=6333, avg=3719.28, stdev=631.84 00:24:20.133 lat (usec): min=1018, max=6345, avg=3734.20, stdev=631.21 00:24:20.133 clat percentiles (usec): 00:24:20.133 | 1.00th=[ 1975], 5.00th=[ 3064], 10.00th=[ 3097], 20.00th=[ 
3163], 00:24:20.133 | 30.00th=[ 3458], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3687], 00:24:20.133 | 70.00th=[ 3752], 80.00th=[ 4047], 90.00th=[ 4752], 95.00th=[ 4948], 00:24:20.133 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 5932], 99.95th=[ 5932], 00:24:20.133 | 99.99th=[ 5997] 00:24:20.133 bw ( KiB/s): min=16256, max=17552, per=24.52%, avg=16935.11, stdev=499.74, samples=9 00:24:20.133 iops : min= 2032, max= 2194, avg=2116.89, stdev=62.47, samples=9 00:24:20.133 lat (msec) : 2=1.02%, 4=77.99%, 10=20.99% 00:24:20.133 cpu : usr=91.86%, sys=7.38%, ctx=8, majf=0, minf=0 00:24:20.133 IO depths : 1=0.1%, 2=12.7%, 4=60.4%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:20.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.133 complete : 0=0.0%, 4=95.0%, 8=5.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.133 issued rwts: total=10620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.133 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:20.133 filename1: (groupid=0, jobs=1): err= 0: pid=99807: Mon Dec 16 14:40:11 2024 00:24:20.133 read: IOPS=2179, BW=17.0MiB/s (17.9MB/s)(85.2MiB/5003msec) 00:24:20.133 slat (nsec): min=6951, max=75349, avg=14508.60, stdev=5405.21 00:24:20.133 clat (usec): min=834, max=6290, avg=3623.80, stdev=609.75 00:24:20.133 lat (usec): min=842, max=6304, avg=3638.31, stdev=610.25 00:24:20.133 clat percentiles (usec): 00:24:20.133 | 1.00th=[ 1860], 5.00th=[ 2999], 10.00th=[ 3064], 20.00th=[ 3130], 00:24:20.133 | 30.00th=[ 3326], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3654], 00:24:20.133 | 70.00th=[ 3720], 80.00th=[ 3884], 90.00th=[ 4555], 95.00th=[ 4817], 00:24:20.133 | 99.00th=[ 5211], 99.50th=[ 5342], 99.90th=[ 5669], 99.95th=[ 5735], 00:24:20.133 | 99.99th=[ 5997] 00:24:20.133 bw ( KiB/s): min=16896, max=18352, per=25.25%, avg=17443.20, stdev=405.46, samples=10 00:24:20.133 iops : min= 2112, max= 2294, avg=2180.40, stdev=50.68, samples=10 00:24:20.133 lat (usec) : 1000=0.02% 00:24:20.133 lat (msec) : 2=1.40%, 4=80.99%, 10=17.59% 00:24:20.133 cpu : usr=91.30%, sys=7.94%, ctx=9, majf=0, minf=0 00:24:20.133 IO depths : 1=0.1%, 2=11.2%, 4=61.4%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:20.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.133 complete : 0=0.0%, 4=95.6%, 8=4.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.133 issued rwts: total=10906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.133 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:20.133 00:24:20.133 Run status group 0 (all jobs): 00:24:20.133 READ: bw=67.5MiB/s (70.7MB/s), 16.6MiB/s-17.0MiB/s (17.4MB/s-17.9MB/s), io=338MiB (354MB), run=5001-5003msec 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.133 14:40:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:20.134 14:40:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.134 14:40:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:20.134 14:40:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.134 14:40:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:20.134 ************************************ 00:24:20.134 END TEST fio_dif_rand_params 00:24:20.134 ************************************ 00:24:20.134 14:40:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.134 00:24:20.134 real 0m23.058s 00:24:20.134 user 2m2.933s 00:24:20.134 sys 0m8.533s 00:24:20.134 14:40:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:20.134 14:40:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:20.134 14:40:11 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:20.134 14:40:11 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:20.134 14:40:11 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.134 14:40:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:20.134 ************************************ 00:24:20.134 START TEST fio_dif_digest 00:24:20.134 ************************************ 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:20.134 bdev_null0 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:20.134 [2024-12-16 14:40:11.661580] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:20.134 { 00:24:20.134 "params": { 00:24:20.134 "name": "Nvme$subsystem", 00:24:20.134 "trtype": "$TEST_TRANSPORT", 00:24:20.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.134 "adrfam": "ipv4", 00:24:20.134 "trsvcid": "$NVMF_PORT", 00:24:20.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.134 "hdgst": ${hdgst:-false}, 00:24:20.134 "ddgst": ${ddgst:-false} 00:24:20.134 }, 00:24:20.134 "method": "bdev_nvme_attach_controller" 00:24:20.134 } 00:24:20.134 EOF 00:24:20.134 )") 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 
-- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:20.134 "params": { 00:24:20.134 "name": "Nvme0", 00:24:20.134 "trtype": "tcp", 00:24:20.134 "traddr": "10.0.0.3", 00:24:20.134 "adrfam": "ipv4", 00:24:20.134 "trsvcid": "4420", 00:24:20.134 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:20.134 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:20.134 "hdgst": true, 00:24:20.134 "ddgst": true 00:24:20.134 }, 00:24:20.134 "method": "bdev_nvme_attach_controller" 00:24:20.134 }' 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:20.134 14:40:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:20.134 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:20.134 ... 
00:24:20.134 fio-3.35 00:24:20.134 Starting 3 threads 00:24:32.344 00:24:32.344 filename0: (groupid=0, jobs=1): err= 0: pid=99909: Mon Dec 16 14:40:22 2024 00:24:32.344 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(302MiB/10009msec) 00:24:32.344 slat (nsec): min=3557, max=39898, avg=9951.91, stdev=3874.33 00:24:32.344 clat (usec): min=11850, max=18768, avg=12392.85, stdev=539.85 00:24:32.344 lat (usec): min=11858, max=18796, avg=12402.81, stdev=540.17 00:24:32.344 clat percentiles (usec): 00:24:32.344 | 1.00th=[11994], 5.00th=[11994], 10.00th=[11994], 20.00th=[12125], 00:24:32.344 | 30.00th=[12125], 40.00th=[12125], 50.00th=[12125], 60.00th=[12256], 00:24:32.344 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13042], 95.00th=[13566], 00:24:32.344 | 99.00th=[14222], 99.50th=[14353], 99.90th=[18744], 99.95th=[18744], 00:24:32.344 | 99.99th=[18744] 00:24:32.344 bw ( KiB/s): min=29952, max=31488, per=33.37%, avg=30956.05, stdev=579.00, samples=19 00:24:32.344 iops : min= 234, max= 246, avg=241.84, stdev= 4.52, samples=19 00:24:32.344 lat (msec) : 20=100.00% 00:24:32.344 cpu : usr=91.93%, sys=7.55%, ctx=19, majf=0, minf=0 00:24:32.344 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:32.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.344 issued rwts: total=2418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:32.344 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:32.344 filename0: (groupid=0, jobs=1): err= 0: pid=99910: Mon Dec 16 14:40:22 2024 00:24:32.344 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(302MiB/10002msec) 00:24:32.344 slat (nsec): min=7046, max=54117, avg=14653.36, stdev=4394.26 00:24:32.344 clat (usec): min=10349, max=14510, avg=12377.10, stdev=500.63 00:24:32.344 lat (usec): min=10362, max=14527, avg=12391.76, stdev=501.01 00:24:32.344 clat percentiles (usec): 00:24:32.344 | 1.00th=[11994], 5.00th=[11994], 10.00th=[11994], 20.00th=[11994], 00:24:32.344 | 30.00th=[12125], 40.00th=[12125], 50.00th=[12125], 60.00th=[12256], 00:24:32.344 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13042], 95.00th=[13566], 00:24:32.344 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14484], 99.95th=[14484], 00:24:32.344 | 99.99th=[14484] 00:24:32.344 bw ( KiB/s): min=29952, max=31488, per=33.38%, avg=30962.53, stdev=575.44, samples=19 00:24:32.344 iops : min= 234, max= 246, avg=241.89, stdev= 4.50, samples=19 00:24:32.344 lat (msec) : 20=100.00% 00:24:32.344 cpu : usr=91.72%, sys=7.81%, ctx=11, majf=0, minf=0 00:24:32.344 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:32.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.344 issued rwts: total=2418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:32.344 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:32.344 filename0: (groupid=0, jobs=1): err= 0: pid=99911: Mon Dec 16 14:40:22 2024 00:24:32.344 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(302MiB/10003msec) 00:24:32.344 slat (nsec): min=7301, max=52890, avg=14513.80, stdev=4416.85 00:24:32.344 clat (usec): min=10363, max=14525, avg=12377.89, stdev=501.26 00:24:32.344 lat (usec): min=10376, max=14539, avg=12392.40, stdev=501.65 00:24:32.344 clat percentiles (usec): 00:24:32.344 | 1.00th=[11994], 5.00th=[11994], 10.00th=[11994], 20.00th=[11994], 00:24:32.344 | 30.00th=[12125], 40.00th=[12125], 
50.00th=[12125], 60.00th=[12256], 00:24:32.344 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13042], 95.00th=[13566], 00:24:32.345 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14484], 99.95th=[14484], 00:24:32.345 | 99.99th=[14484] 00:24:32.345 bw ( KiB/s): min=29952, max=31488, per=33.38%, avg=30962.53, stdev=575.44, samples=19 00:24:32.345 iops : min= 234, max= 246, avg=241.89, stdev= 4.50, samples=19 00:24:32.345 lat (msec) : 20=100.00% 00:24:32.345 cpu : usr=91.14%, sys=8.34%, ctx=68, majf=0, minf=0 00:24:32.345 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:32.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:32.345 issued rwts: total=2418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:32.345 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:32.345 00:24:32.345 Run status group 0 (all jobs): 00:24:32.345 READ: bw=90.6MiB/s (95.0MB/s), 30.2MiB/s-30.2MiB/s (31.7MB/s-31.7MB/s), io=907MiB (951MB), run=10002-10009msec 00:24:32.345 14:40:22 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:32.345 14:40:22 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:32.345 14:40:22 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:32.345 14:40:22 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:32.345 14:40:22 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:32.345 14:40:22 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:32.345 14:40:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.345 14:40:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:32.345 14:40:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.345 14:40:22 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:32.345 14:40:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.345 14:40:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:32.345 ************************************ 00:24:32.345 END TEST fio_dif_digest 00:24:32.345 ************************************ 00:24:32.345 14:40:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.345 00:24:32.345 real 0m10.855s 00:24:32.345 user 0m28.051s 00:24:32.345 sys 0m2.589s 00:24:32.345 14:40:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:32.345 14:40:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:32.345 14:40:22 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:32.345 14:40:22 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:32.345 14:40:22 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:32.345 14:40:22 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:24:32.345 14:40:22 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:32.345 14:40:22 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:24:32.345 14:40:22 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:32.345 14:40:22 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:32.345 rmmod nvme_tcp 00:24:32.345 rmmod nvme_fabrics 00:24:32.345 rmmod nvme_keyring 00:24:32.345 14:40:22 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:32.345 14:40:22 nvmf_dif 
-- nvmf/common.sh@128 -- # set -e 00:24:32.345 14:40:22 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:24:32.345 14:40:22 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 99171 ']' 00:24:32.345 14:40:22 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 99171 00:24:32.345 14:40:22 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 99171 ']' 00:24:32.345 14:40:22 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 99171 00:24:32.345 14:40:22 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:24:32.345 14:40:22 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:32.345 14:40:22 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99171 00:24:32.345 killing process with pid 99171 00:24:32.345 14:40:22 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:32.345 14:40:22 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:32.345 14:40:22 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99171' 00:24:32.345 14:40:22 nvmf_dif -- common/autotest_common.sh@973 -- # kill 99171 00:24:32.345 14:40:22 nvmf_dif -- common/autotest_common.sh@978 -- # wait 99171 00:24:32.345 14:40:22 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:24:32.345 14:40:22 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:32.345 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:32.345 Waiting for block devices as requested 00:24:32.345 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:32.345 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:32.345 14:40:23 nvmf_dif -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.345 14:40:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:32.345 14:40:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.345 14:40:23 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:24:32.345 00:24:32.345 real 0m58.505s 00:24:32.345 user 3m45.803s 00:24:32.345 sys 0m19.564s 00:24:32.345 14:40:23 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:32.345 14:40:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:32.345 ************************************ 00:24:32.345 END TEST nvmf_dif 00:24:32.345 ************************************ 00:24:32.345 14:40:23 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:32.345 14:40:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:32.345 14:40:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:32.345 14:40:23 -- common/autotest_common.sh@10 -- # set +x 00:24:32.345 ************************************ 00:24:32.345 START TEST nvmf_abort_qd_sizes 00:24:32.345 ************************************ 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:32.345 * Looking for test storage... 00:24:32.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:32.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.345 --rc genhtml_branch_coverage=1 00:24:32.345 --rc genhtml_function_coverage=1 00:24:32.345 --rc genhtml_legend=1 00:24:32.345 --rc geninfo_all_blocks=1 00:24:32.345 --rc geninfo_unexecuted_blocks=1 00:24:32.345 00:24:32.345 ' 00:24:32.345 14:40:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:32.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.346 --rc genhtml_branch_coverage=1 00:24:32.346 --rc genhtml_function_coverage=1 00:24:32.346 --rc genhtml_legend=1 00:24:32.346 --rc geninfo_all_blocks=1 00:24:32.346 --rc geninfo_unexecuted_blocks=1 00:24:32.346 00:24:32.346 ' 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:32.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.346 --rc genhtml_branch_coverage=1 00:24:32.346 --rc genhtml_function_coverage=1 00:24:32.346 --rc genhtml_legend=1 00:24:32.346 --rc geninfo_all_blocks=1 00:24:32.346 --rc geninfo_unexecuted_blocks=1 00:24:32.346 00:24:32.346 ' 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:32.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.346 --rc genhtml_branch_coverage=1 00:24:32.346 --rc genhtml_function_coverage=1 00:24:32.346 --rc genhtml_legend=1 00:24:32.346 --rc geninfo_all_blocks=1 00:24:32.346 --rc geninfo_unexecuted_blocks=1 00:24:32.346 00:24:32.346 ' 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:32.346 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:32.346 Cannot find device "nvmf_init_br" 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:32.346 Cannot find device "nvmf_init_br2" 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:32.346 Cannot find device "nvmf_tgt_br" 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:32.346 Cannot find device "nvmf_tgt_br2" 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:32.346 Cannot find device "nvmf_init_br" 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:32.346 Cannot find device "nvmf_init_br2" 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:32.346 Cannot find device "nvmf_tgt_br" 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:32.346 Cannot find device "nvmf_tgt_br2" 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:32.346 Cannot find device "nvmf_br" 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:32.346 Cannot find device "nvmf_init_if" 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:24:32.346 14:40:23 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:32.346 Cannot find device "nvmf_init_if2" 00:24:32.346 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:24:32.346 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:32.346 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
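The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init first tears down any interfaces left over from a previous run, and each teardown command is allowed to fail when nothing exists yet. A minimal sketch of that guard pattern, reconstructed from the trace (the exact `|| true` form is an assumption; xtrace only shows each failing command followed by a bare `true`):

    # teardown of possibly-absent test interfaces; failures are ignored on purpose
    ip link set nvmf_init_br nomaster || true
    ip link delete nvmf_br type bridge || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true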
00:24:32.346 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:24:32.346 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:32.346 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:32.346 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:24:32.346 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:32.346 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:32.347 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:32.347 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:24:32.347 00:24:32.347 --- 10.0.0.3 ping statistics --- 00:24:32.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.347 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:32.347 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:32.347 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:24:32.347 00:24:32.347 --- 10.0.0.4 ping statistics --- 00:24:32.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.347 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:32.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:32.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:24:32.347 00:24:32.347 --- 10.0.0.1 ping statistics --- 00:24:32.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.347 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:32.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:32.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:24:32.347 00:24:32.347 --- 10.0.0.2 ping statistics --- 00:24:32.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.347 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:24:32.347 14:40:24 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:32.915 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:32.915 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:32.915 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:32.915 14:40:25 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.915 14:40:25 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:32.915 14:40:25 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:32.915 14:40:25 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.915 14:40:25 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:32.915 14:40:25 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:32.915 14:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:32.915 14:40:25 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:32.915 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:32.915 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:33.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.174 14:40:25 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=100571 00:24:33.174 14:40:25 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 100571 00:24:33.174 14:40:25 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:33.175 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 100571 ']' 00:24:33.175 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.175 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:33.175 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.175 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:33.175 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:33.175 [2024-12-16 14:40:25.181605] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
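The target process is started inside the nvmf_tgt_ns_spdk namespace, so it only sees the namespaced veth addresses (10.0.0.3/10.0.0.4), while the initiator-side tools in the root namespace reach it across the nvmf_br bridge from 10.0.0.1/10.0.0.2. A condensed sketch of the launch-and-wait step traced above; the polling loop is an assumption standing in for the harness's own waitforlisten helper:

    # start the SPDK NVMe-oF target on 4 cores (-m 0xf) inside the test namespace
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    # wait until the RPC socket answers before issuing any nvmf_* RPCs
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &>/dev/null; do
        sleep 0.1
    done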
00:24:33.175 [2024-12-16 14:40:25.181703] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.175 [2024-12-16 14:40:25.335686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:33.175 [2024-12-16 14:40:25.363014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.175 [2024-12-16 14:40:25.363310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.175 [2024-12-16 14:40:25.363548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.175 [2024-12-16 14:40:25.363717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.175 [2024-12-16 14:40:25.363763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.175 [2024-12-16 14:40:25.364838] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.175 [2024-12-16 14:40:25.364966] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.175 [2024-12-16 14:40:25.365043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.175 [2024-12-16 14:40:25.365041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:24:33.434 [2024-12-16 14:40:25.402461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:24:33.434 14:40:25 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
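nvme_in_userspace, traced above, enumerates NVMe controllers by PCI class code 01 (mass storage), subclass 08 (NVM), programming interface 02 (NVMe), then keeps only the BDFs still bound to the kernel nvme driver; here that yields 0000:00:10.0 and 0000:00:11.0, and the first one is picked for the spdk_target_abort run. The individual commands appear in the trace; joining them into one pipeline is our condensation:

    # list NVMe BDFs by class code 0108, prog-if 02
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v cc="0108" -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # a BDF is kept only if the kernel nvme driver still owns it
    [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] && bdfs+=("0000:00:10.0")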
00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:33.434 14:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:33.434 ************************************ 00:24:33.434 START TEST spdk_target_abort 00:24:33.434 ************************************ 00:24:33.434 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:24:33.434 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:33.435 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:33.435 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.435 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:33.435 spdk_targetn1 00:24:33.435 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.435 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:33.435 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.435 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:33.435 [2024-12-16 14:40:25.628373] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:33.694 [2024-12-16 14:40:25.666836] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:33.694 14:40:25 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:33.694 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:33.695 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:33.695 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:33.695 14:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:36.984 Initializing NVMe Controllers 00:24:36.984 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:36.984 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:36.984 Initialization complete. Launching workers. 
00:24:36.984 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9845, failed: 0 00:24:36.984 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1095, failed to submit 8750 00:24:36.984 success 923, unsuccessful 172, failed 0 00:24:36.984 14:40:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:36.984 14:40:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:40.272 Initializing NVMe Controllers 00:24:40.272 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:40.272 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:40.272 Initialization complete. Launching workers. 00:24:40.272 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9000, failed: 0 00:24:40.272 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1171, failed to submit 7829 00:24:40.272 success 427, unsuccessful 744, failed 0 00:24:40.272 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:40.272 14:40:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:43.560 Initializing NVMe Controllers 00:24:43.560 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:43.560 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:43.560 Initialization complete. Launching workers. 
00:24:43.560 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31996, failed: 0 00:24:43.560 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2290, failed to submit 29706 00:24:43.560 success 453, unsuccessful 1837, failed 0 00:24:43.560 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:43.560 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.560 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:43.560 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.560 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:43.560 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.560 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:43.819 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.819 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 100571 00:24:43.819 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 100571 ']' 00:24:43.819 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 100571 00:24:43.819 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:24:43.819 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.819 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100571 00:24:43.819 killing process with pid 100571 00:24:43.820 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:43.820 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:43.820 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100571' 00:24:43.820 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 100571 00:24:43.820 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 100571 00:24:43.820 ************************************ 00:24:43.820 END TEST spdk_target_abort 00:24:43.820 ************************************ 00:24:43.820 00:24:43.820 real 0m10.430s 00:24:43.820 user 0m40.014s 00:24:43.820 sys 0m2.127s 00:24:43.820 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:43.820 14:40:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:44.079 14:40:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:44.079 14:40:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:44.079 14:40:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:44.079 14:40:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:44.079 ************************************ 00:24:44.079 START TEST kernel_target_abort 00:24:44.079 
************************************ 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:44.079 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:44.338 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:44.338 Waiting for block devices as requested 00:24:44.338 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:44.338 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:44.597 No valid GPT data, bailing 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:44.597 No valid GPT data, bailing 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
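configure_kernel_target needs a backing block device that is neither zoned nor already carrying a partition table; each candidate above fails the GPT probe ("No valid GPT data, bailing") and is therefore treated as free, and the last free device found becomes the namespace backing device, as the continuation of the trace shows. A rough condensation of that screening (the real helper also consults spdk-gpt.py; treating an empty PTTYPE from blkid as "free" is the simplifying assumption here):

    nvme=
    for block in /sys/block/nvme*; do
        dev=/dev/${block##*/}
        # skip zoned namespaces, they cannot back a plain nvmet namespace here
        [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]] && continue
        # no partition-table signature -> consider the device free to use
        [[ -z $(blkid -s PTTYPE -o value "$dev") ]] && nvme=$dev
    done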
00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:44.597 No valid GPT data, bailing 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:44.597 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:44.856 No valid GPT data, bailing 00:24:44.856 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:44.856 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:44.856 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:44.856 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:24:44.856 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:44.856 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:44.856 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:44.856 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:44.856 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:44.856 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 --hostid=63735ac0-cf43-4c13-880c-ea4676416181 -a 10.0.0.1 -t tcp -s 4420 00:24:44.857 00:24:44.857 Discovery Log Number of Records 2, Generation counter 2 00:24:44.857 =====Discovery Log Entry 0====== 00:24:44.857 trtype: tcp 00:24:44.857 adrfam: ipv4 00:24:44.857 subtype: current discovery subsystem 00:24:44.857 treq: not specified, sq flow control disable supported 00:24:44.857 portid: 1 00:24:44.857 trsvcid: 4420 00:24:44.857 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:44.857 traddr: 10.0.0.1 00:24:44.857 eflags: none 00:24:44.857 sectype: none 00:24:44.857 =====Discovery Log Entry 1====== 00:24:44.857 trtype: tcp 00:24:44.857 adrfam: ipv4 00:24:44.857 subtype: nvme subsystem 00:24:44.857 treq: not specified, sq flow control disable supported 00:24:44.857 portid: 1 00:24:44.857 trsvcid: 4420 00:24:44.857 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:44.857 traddr: 10.0.0.1 00:24:44.857 eflags: none 00:24:44.857 sectype: none 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:44.857 14:40:36 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:44.857 14:40:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:48.146 Initializing NVMe Controllers 00:24:48.146 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:48.146 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:48.146 Initialization complete. Launching workers. 00:24:48.146 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31464, failed: 0 00:24:48.146 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31464, failed to submit 0 00:24:48.146 success 0, unsuccessful 31464, failed 0 00:24:48.146 14:40:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:48.146 14:40:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:51.433 Initializing NVMe Controllers 00:24:51.433 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:51.433 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:51.433 Initialization complete. Launching workers. 
00:24:51.433 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64224, failed: 0 00:24:51.433 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25547, failed to submit 38677 00:24:51.433 success 0, unsuccessful 25547, failed 0 00:24:51.433 14:40:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:51.433 14:40:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:54.719 Initializing NVMe Controllers 00:24:54.719 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:54.719 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:54.719 Initialization complete. Launching workers. 00:24:54.719 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69993, failed: 0 00:24:54.719 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17462, failed to submit 52531 00:24:54.719 success 0, unsuccessful 17462, failed 0 00:24:54.719 14:40:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:54.719 14:40:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:54.719 14:40:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:24:54.719 14:40:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:54.719 14:40:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:54.719 14:40:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:54.719 14:40:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:54.719 14:40:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:54.719 14:40:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:54.719 14:40:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:54.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:55.915 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:55.915 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:55.915 00:24:55.915 real 0m11.920s 00:24:55.915 user 0m5.795s 00:24:55.915 sys 0m3.476s 00:24:55.915 14:40:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:55.915 14:40:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:55.915 ************************************ 00:24:55.915 END TEST kernel_target_abort 00:24:55.915 ************************************ 00:24:55.915 14:40:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:55.915 14:40:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:55.915 
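That closes out kernel_target_abort: nvmf/common.sh exported the chosen namespace through the in-kernel nvmet target on loopback TCP (10.0.0.1:4420), nvme discover confirmed the subsystem was reachable, and the SPDK abort example was run against it at queue depths 4, 24 and 64 before the target was torn down. A condensed sketch of that configfs sequence, assuming the standard nvmet attribute file names (the trace shows the values being echoed but not every redirect target; attr_allow_any_host in particular is an assumption):

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo 1            > "$subsys/attr_allow_any_host"        # assumed attribute, not shown in the trace
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

    nvme discover -t tcp -a 10.0.0.1 -s 4420                 # the discovery log printed above
    for qd in 4 24 64; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

    # teardown, mirroring the clean_kernel_target trace above
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet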
14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:55.915 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:24:55.915 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:55.915 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:24:55.915 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:55.915 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:55.915 rmmod nvme_tcp 00:24:55.915 rmmod nvme_fabrics 00:24:55.915 rmmod nvme_keyring 00:24:56.174 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:56.174 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:24:56.174 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:24:56.174 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 100571 ']' 00:24:56.174 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 100571 00:24:56.174 14:40:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 100571 ']' 00:24:56.174 14:40:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 100571 00:24:56.174 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (100571) - No such process 00:24:56.174 14:40:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 100571 is not found' 00:24:56.174 Process with pid 100571 is not found 00:24:56.174 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:24:56.174 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:56.433 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:56.433 Waiting for block devices as requested 00:24:56.433 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:56.692 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:56.692 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:56.692 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:56.692 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:24:56.692 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:24:56.692 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:56.692 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:24:56.692 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:56.692 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:56.692 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:56.692 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:56.692 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:56.692 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:56.692 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:56.692 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:56.692 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:56.692 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:56.692 14:40:48 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:56.692 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:56.951 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:56.951 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:56.951 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:56.951 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:56.951 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.951 14:40:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:56.951 14:40:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.951 14:40:48 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:24:56.951 00:24:56.951 real 0m25.348s 00:24:56.951 user 0m46.982s 00:24:56.951 sys 0m7.043s 00:24:56.951 14:40:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:56.951 14:40:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:56.951 ************************************ 00:24:56.951 END TEST nvmf_abort_qd_sizes 00:24:56.951 ************************************ 00:24:56.951 14:40:49 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:56.951 14:40:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:56.951 14:40:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:56.951 14:40:49 -- common/autotest_common.sh@10 -- # set +x 00:24:56.951 ************************************ 00:24:56.951 START TEST keyring_file 00:24:56.951 ************************************ 00:24:56.951 14:40:49 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:56.951 * Looking for test storage... 
00:24:56.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:56.951 14:40:49 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:56.951 14:40:49 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:24:56.951 14:40:49 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:57.211 14:40:49 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@345 -- # : 1 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@353 -- # local d=1 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@355 -- # echo 1 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@353 -- # local d=2 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@355 -- # echo 2 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@368 -- # return 0 00:24:57.211 14:40:49 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.211 14:40:49 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:57.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.211 --rc genhtml_branch_coverage=1 00:24:57.211 --rc genhtml_function_coverage=1 00:24:57.211 --rc genhtml_legend=1 00:24:57.211 --rc geninfo_all_blocks=1 00:24:57.211 --rc geninfo_unexecuted_blocks=1 00:24:57.211 00:24:57.211 ' 00:24:57.211 14:40:49 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:57.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.211 --rc genhtml_branch_coverage=1 00:24:57.211 --rc genhtml_function_coverage=1 00:24:57.211 --rc genhtml_legend=1 00:24:57.211 --rc geninfo_all_blocks=1 00:24:57.211 --rc 
geninfo_unexecuted_blocks=1 00:24:57.211 00:24:57.211 ' 00:24:57.211 14:40:49 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:57.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.211 --rc genhtml_branch_coverage=1 00:24:57.211 --rc genhtml_function_coverage=1 00:24:57.211 --rc genhtml_legend=1 00:24:57.211 --rc geninfo_all_blocks=1 00:24:57.211 --rc geninfo_unexecuted_blocks=1 00:24:57.211 00:24:57.211 ' 00:24:57.211 14:40:49 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:57.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.211 --rc genhtml_branch_coverage=1 00:24:57.211 --rc genhtml_function_coverage=1 00:24:57.211 --rc genhtml_legend=1 00:24:57.211 --rc geninfo_all_blocks=1 00:24:57.211 --rc geninfo_unexecuted_blocks=1 00:24:57.211 00:24:57.211 ' 00:24:57.211 14:40:49 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:57.211 14:40:49 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.211 14:40:49 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.211 14:40:49 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.211 14:40:49 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.211 14:40:49 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.211 14:40:49 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:57.211 14:40:49 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@51 -- # : 0 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.211 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.211 14:40:49 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:57.211 14:40:49 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:57.211 14:40:49 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:57.211 14:40:49 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:57.211 14:40:49 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:57.211 14:40:49 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:57.211 14:40:49 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:57.211 14:40:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:57.211 14:40:49 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:57.211 14:40:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:57.211 14:40:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:57.211 14:40:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:57.211 14:40:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.azIJXKC773 00:24:57.211 14:40:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:57.211 14:40:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:57.212 14:40:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.azIJXKC773 00:24:57.212 14:40:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.azIJXKC773 00:24:57.212 14:40:49 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.azIJXKC773 00:24:57.212 14:40:49 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:57.212 14:40:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:57.212 14:40:49 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:57.212 14:40:49 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:57.212 14:40:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:57.212 14:40:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:57.212 14:40:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.S43uJ5Rve9 00:24:57.212 14:40:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:57.212 14:40:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:57.212 14:40:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:57.212 14:40:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:57.212 14:40:49 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:24:57.212 14:40:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:57.212 14:40:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:57.212 14:40:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.S43uJ5Rve9 00:24:57.212 14:40:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.S43uJ5Rve9 00:24:57.212 14:40:49 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.S43uJ5Rve9 00:24:57.212 14:40:49 keyring_file -- keyring/file.sh@30 -- # tgtpid=101463 00:24:57.212 14:40:49 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:57.212 14:40:49 keyring_file -- keyring/file.sh@32 -- # waitforlisten 101463 00:24:57.212 14:40:49 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 101463 ']' 00:24:57.212 14:40:49 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.212 14:40:49 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:57.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
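Before the target process is up, each PSK is staged on disk: mktemp allocates a path, the inline python behind format_interchange_psk renders the raw hex key as an NVMe TLS interchange string ("NVMeTLSkey-1:..."), and the file is restricted to mode 0600, which the keyring later insists on. Condensed from the prep_key trace, reusing the suite's own helper:

    # one prep_key pass (key0); key1 is prepared the same way with its own hex string
    path=$(mktemp)                                             # e.g. /tmp/tmp.azIJXKC773
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
    chmod 0600 "$path"                                         # looser modes are rejected further down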
00:24:57.212 14:40:49 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.212 14:40:49 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:57.212 14:40:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:57.471 [2024-12-16 14:40:49.444982] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:24:57.471 [2024-12-16 14:40:49.445109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101463 ] 00:24:57.471 [2024-12-16 14:40:49.596022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.471 [2024-12-16 14:40:49.621580] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.471 [2024-12-16 14:40:49.666155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:57.731 14:40:49 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:57.731 14:40:49 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:24:57.731 14:40:49 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:57.731 14:40:49 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.731 14:40:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:57.731 [2024-12-16 14:40:49.814847] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.731 null0 00:24:57.731 [2024-12-16 14:40:49.846816] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:57.731 [2024-12-16 14:40:49.847027] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:57.731 14:40:49 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.731 14:40:49 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:57.731 14:40:49 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:24:57.731 14:40:49 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:57.731 14:40:49 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:57.731 14:40:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.731 14:40:49 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:57.731 14:40:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.731 14:40:49 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:57.731 14:40:49 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.731 14:40:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:57.731 [2024-12-16 14:40:49.878818] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:57.731 request: 00:24:57.731 { 00:24:57.731 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:57.731 "secure_channel": false, 00:24:57.731 "listen_address": { 00:24:57.731 "trtype": "tcp", 00:24:57.732 "traddr": "127.0.0.1", 00:24:57.732 "trsvcid": "4420" 00:24:57.732 }, 00:24:57.732 "method": "nvmf_subsystem_add_listener", 
00:24:57.732 "req_id": 1 00:24:57.732 } 00:24:57.732 Got JSON-RPC error response 00:24:57.732 response: 00:24:57.732 { 00:24:57.732 "code": -32602, 00:24:57.732 "message": "Invalid parameters" 00:24:57.732 } 00:24:57.732 14:40:49 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:57.732 14:40:49 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:57.732 14:40:49 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:57.732 14:40:49 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:57.732 14:40:49 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:57.732 14:40:49 keyring_file -- keyring/file.sh@47 -- # bperfpid=101474 00:24:57.732 14:40:49 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:57.732 14:40:49 keyring_file -- keyring/file.sh@49 -- # waitforlisten 101474 /var/tmp/bperf.sock 00:24:57.732 14:40:49 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 101474 ']' 00:24:57.732 14:40:49 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:57.732 14:40:49 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:57.732 14:40:49 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:57.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:57.732 14:40:49 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:57.732 14:40:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:58.024 [2024-12-16 14:40:49.942906] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:24:58.024 [2024-12-16 14:40:49.943034] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101474 ] 00:24:58.024 [2024-12-16 14:40:50.098729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.024 [2024-12-16 14:40:50.119994] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.024 [2024-12-16 14:40:50.151219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:58.024 14:40:50 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.024 14:40:50 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:24:58.024 14:40:50 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.azIJXKC773 00:24:58.024 14:40:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.azIJXKC773 00:24:58.592 14:40:50 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.S43uJ5Rve9 00:24:58.592 14:40:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.S43uJ5Rve9 00:24:58.592 14:40:50 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:24:58.592 14:40:50 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:58.592 14:40:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:58.592 14:40:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.592 14:40:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:58.850 14:40:51 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.azIJXKC773 == \/\t\m\p\/\t\m\p\.\a\z\I\J\X\K\C\7\7\3 ]] 00:24:58.850 14:40:51 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:24:58.850 14:40:51 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:24:58.850 14:40:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:58.850 14:40:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.850 14:40:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:59.108 14:40:51 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.S43uJ5Rve9 == \/\t\m\p\/\t\m\p\.\S\4\3\u\J\5\R\v\e\9 ]] 00:24:59.108 14:40:51 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:24:59.108 14:40:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:59.108 14:40:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:59.108 14:40:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:59.108 14:40:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:59.108 14:40:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:59.675 14:40:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:59.675 14:40:51 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:24:59.675 14:40:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:59.675 14:40:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:59.675 14:40:51 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:59.675 14:40:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:59.675 14:40:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:59.675 14:40:51 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:24:59.675 14:40:51 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:59.675 14:40:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:59.933 [2024-12-16 14:40:52.036070] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:59.933 nvme0n1 00:24:59.933 14:40:52 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:24:59.933 14:40:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:59.933 14:40:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:59.933 14:40:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:59.933 14:40:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:00.191 14:40:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:00.191 14:40:52 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:25:00.191 14:40:52 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:25:00.191 14:40:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:00.191 14:40:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:00.191 14:40:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:00.191 14:40:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:00.191 14:40:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:00.449 14:40:52 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:25:00.449 14:40:52 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:00.708 Running I/O for 1 seconds... 
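Everything feeding the one-second run below was pushed over the bperf RPC socket rather than given on the command line: both key files were registered by name, a controller was attached to the local listener using key0 as the TLS PSK, and the bdevperf helper script triggered the workload. Condensed from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.azIJXKC773
    "$rpc" -s "$sock" keyring_file_add_key key1 /tmp/tmp.S43uJ5Rve9
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests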
00:25:01.643 13827.00 IOPS, 54.01 MiB/s 00:25:01.643 Latency(us) 00:25:01.643 [2024-12-16T14:40:53.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.643 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:25:01.643 nvme0n1 : 1.01 13872.09 54.19 0.00 0.00 9203.38 4230.05 15490.33 00:25:01.643 [2024-12-16T14:40:53.843Z] =================================================================================================================== 00:25:01.643 [2024-12-16T14:40:53.843Z] Total : 13872.09 54.19 0.00 0.00 9203.38 4230.05 15490.33 00:25:01.643 { 00:25:01.643 "results": [ 00:25:01.643 { 00:25:01.643 "job": "nvme0n1", 00:25:01.643 "core_mask": "0x2", 00:25:01.643 "workload": "randrw", 00:25:01.643 "percentage": 50, 00:25:01.643 "status": "finished", 00:25:01.643 "queue_depth": 128, 00:25:01.643 "io_size": 4096, 00:25:01.643 "runtime": 1.006049, 00:25:01.643 "iops": 13872.087741253159, 00:25:01.643 "mibps": 54.18784273927015, 00:25:01.643 "io_failed": 0, 00:25:01.643 "io_timeout": 0, 00:25:01.643 "avg_latency_us": 9203.381899476277, 00:25:01.643 "min_latency_us": 4230.050909090909, 00:25:01.643 "max_latency_us": 15490.327272727272 00:25:01.643 } 00:25:01.643 ], 00:25:01.643 "core_count": 1 00:25:01.643 } 00:25:01.643 14:40:53 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:01.643 14:40:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:01.901 14:40:54 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:25:01.901 14:40:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:01.901 14:40:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:01.901 14:40:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:01.901 14:40:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:01.901 14:40:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:02.159 14:40:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:25:02.159 14:40:54 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:25:02.159 14:40:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:02.159 14:40:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:02.159 14:40:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:02.159 14:40:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:02.159 14:40:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:02.418 14:40:54 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:25:02.418 14:40:54 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:02.418 14:40:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:25:02.418 14:40:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:02.418 14:40:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:02.418 14:40:54 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:02.418 14:40:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:02.418 14:40:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:02.418 14:40:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:02.418 14:40:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:02.677 [2024-12-16 14:40:54.741279] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:02.677 [2024-12-16 14:40:54.741992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98f030 (107): Transport endpoint is not connected 00:25:02.677 [2024-12-16 14:40:54.742983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98f030 (9): Bad file descriptor 00:25:02.677 [2024-12-16 14:40:54.743980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:25:02.677 [2024-12-16 14:40:54.743998] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:02.677 [2024-12-16 14:40:54.744022] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:02.677 [2024-12-16 14:40:54.744032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:25:02.677 request: 00:25:02.677 { 00:25:02.677 "name": "nvme0", 00:25:02.677 "trtype": "tcp", 00:25:02.677 "traddr": "127.0.0.1", 00:25:02.677 "adrfam": "ipv4", 00:25:02.677 "trsvcid": "4420", 00:25:02.677 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:02.677 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:02.677 "prchk_reftag": false, 00:25:02.677 "prchk_guard": false, 00:25:02.677 "hdgst": false, 00:25:02.677 "ddgst": false, 00:25:02.677 "psk": "key1", 00:25:02.677 "allow_unrecognized_csi": false, 00:25:02.677 "method": "bdev_nvme_attach_controller", 00:25:02.677 "req_id": 1 00:25:02.677 } 00:25:02.677 Got JSON-RPC error response 00:25:02.677 response: 00:25:02.677 { 00:25:02.677 "code": -5, 00:25:02.677 "message": "Input/output error" 00:25:02.677 } 00:25:02.677 14:40:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:02.677 14:40:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:02.677 14:40:54 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:02.677 14:40:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:02.677 14:40:54 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:25:02.677 14:40:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:02.677 14:40:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:02.677 14:40:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:02.677 14:40:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:02.677 14:40:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:02.936 14:40:55 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:25:02.936 14:40:55 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:25:02.936 14:40:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:02.936 14:40:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:02.936 14:40:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:02.936 14:40:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:02.936 14:40:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:03.194 14:40:55 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:25:03.194 14:40:55 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:25:03.194 14:40:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:03.452 14:40:55 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:25:03.452 14:40:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:25:03.710 14:40:55 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:25:03.710 14:40:55 keyring_file -- keyring/file.sh@78 -- # jq length 00:25:03.710 14:40:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.968 14:40:56 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:25:03.968 14:40:56 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.azIJXKC773 00:25:03.968 14:40:56 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.azIJXKC773 00:25:03.968 14:40:56 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:25:03.968 14:40:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.azIJXKC773 00:25:03.968 14:40:56 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:03.968 14:40:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:03.968 14:40:56 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:03.968 14:40:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:03.968 14:40:56 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.azIJXKC773 00:25:03.968 14:40:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.azIJXKC773 00:25:04.226 [2024-12-16 14:40:56.342407] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.azIJXKC773': 0100660 00:25:04.226 [2024-12-16 14:40:56.342479] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:04.226 request: 00:25:04.226 { 00:25:04.226 "name": "key0", 00:25:04.226 "path": "/tmp/tmp.azIJXKC773", 00:25:04.226 "method": "keyring_file_add_key", 00:25:04.226 "req_id": 1 00:25:04.226 } 00:25:04.226 Got JSON-RPC error response 00:25:04.226 response: 00:25:04.226 { 00:25:04.226 "code": -1, 00:25:04.226 "message": "Operation not permitted" 00:25:04.226 } 00:25:04.226 14:40:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:04.226 14:40:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:04.226 14:40:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:04.226 14:40:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:04.226 14:40:56 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.azIJXKC773 00:25:04.226 14:40:56 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.azIJXKC773 00:25:04.226 14:40:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.azIJXKC773 00:25:04.484 14:40:56 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.azIJXKC773 00:25:04.485 14:40:56 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:25:04.485 14:40:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:04.485 14:40:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:04.485 14:40:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:04.485 14:40:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:04.485 14:40:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:04.743 14:40:56 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:25:04.743 14:40:56 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:04.743 14:40:56 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:25:04.743 14:40:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:04.743 14:40:56 
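The expected-failure branch above is the permission check: with the backing file widened to 0660, keyring_file_add_key refuses it ("Invalid permissions for key file ... 0100660"), and tightening it back to 0600 lets the registration go through again. As exercised in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    chmod 0660 /tmp/tmp.azIJXKC773
    "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.azIJXKC773   # rejected: file is group-readable
    chmod 0600 /tmp/tmp.azIJXKC773
    "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.azIJXKC773   # accepted again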
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:04.743 14:40:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.743 14:40:56 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:04.743 14:40:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.743 14:40:56 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:04.743 14:40:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:05.002 [2024-12-16 14:40:57.054599] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.azIJXKC773': No such file or directory 00:25:05.002 [2024-12-16 14:40:57.054650] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:05.002 [2024-12-16 14:40:57.054682] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:05.002 [2024-12-16 14:40:57.054690] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:25:05.002 [2024-12-16 14:40:57.054699] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:05.002 [2024-12-16 14:40:57.054706] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:05.002 request: 00:25:05.002 { 00:25:05.002 "name": "nvme0", 00:25:05.002 "trtype": "tcp", 00:25:05.002 "traddr": "127.0.0.1", 00:25:05.002 "adrfam": "ipv4", 00:25:05.002 "trsvcid": "4420", 00:25:05.002 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:05.002 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:05.002 "prchk_reftag": false, 00:25:05.002 "prchk_guard": false, 00:25:05.002 "hdgst": false, 00:25:05.002 "ddgst": false, 00:25:05.002 "psk": "key0", 00:25:05.002 "allow_unrecognized_csi": false, 00:25:05.002 "method": "bdev_nvme_attach_controller", 00:25:05.002 "req_id": 1 00:25:05.002 } 00:25:05.002 Got JSON-RPC error response 00:25:05.002 response: 00:25:05.002 { 00:25:05.002 "code": -19, 00:25:05.002 "message": "No such device" 00:25:05.002 } 00:25:05.002 14:40:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:05.002 14:40:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:05.002 14:40:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:05.002 14:40:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:05.002 14:40:57 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:25:05.002 14:40:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:05.260 14:40:57 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:05.260 14:40:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:05.260 14:40:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:05.260 14:40:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:05.260 
14:40:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:05.260 14:40:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:05.260 14:40:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.q0qRB8dmsa 00:25:05.260 14:40:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:05.260 14:40:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:05.260 14:40:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:25:05.260 14:40:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:05.260 14:40:57 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:25:05.260 14:40:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:25:05.260 14:40:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:25:05.260 14:40:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.q0qRB8dmsa 00:25:05.260 14:40:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.q0qRB8dmsa 00:25:05.260 14:40:57 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.q0qRB8dmsa 00:25:05.260 14:40:57 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.q0qRB8dmsa 00:25:05.260 14:40:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.q0qRB8dmsa 00:25:05.519 14:40:57 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:05.519 14:40:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:05.777 nvme0n1 00:25:06.035 14:40:57 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:25:06.035 14:40:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:06.035 14:40:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:06.035 14:40:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.035 14:40:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.035 14:40:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:06.035 14:40:58 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:25:06.035 14:40:58 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:25:06.035 14:40:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:06.294 14:40:58 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:25:06.294 14:40:58 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:25:06.294 14:40:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.294 14:40:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.294 14:40:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:06.552 14:40:58 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:25:06.552 14:40:58 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:25:06.552 14:40:58 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:25:06.552 14:40:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:06.552 14:40:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.552 14:40:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.552 14:40:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:06.811 14:40:59 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:25:06.811 14:40:59 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:06.811 14:40:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:07.378 14:40:59 keyring_file -- keyring/file.sh@105 -- # jq length 00:25:07.378 14:40:59 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:25:07.378 14:40:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.378 14:40:59 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:25:07.378 14:40:59 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.q0qRB8dmsa 00:25:07.378 14:40:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.q0qRB8dmsa 00:25:07.636 14:40:59 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.S43uJ5Rve9 00:25:07.636 14:40:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.S43uJ5Rve9 00:25:07.894 14:41:00 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:07.894 14:41:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:08.152 nvme0n1 00:25:08.152 14:41:00 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:25:08.152 14:41:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:08.720 14:41:00 keyring_file -- keyring/file.sh@113 -- # config='{ 00:25:08.720 "subsystems": [ 00:25:08.720 { 00:25:08.720 "subsystem": "keyring", 00:25:08.720 "config": [ 00:25:08.720 { 00:25:08.720 "method": "keyring_file_add_key", 00:25:08.720 "params": { 00:25:08.720 "name": "key0", 00:25:08.720 "path": "/tmp/tmp.q0qRB8dmsa" 00:25:08.720 } 00:25:08.720 }, 00:25:08.720 { 00:25:08.720 "method": "keyring_file_add_key", 00:25:08.720 "params": { 00:25:08.720 "name": "key1", 00:25:08.720 "path": "/tmp/tmp.S43uJ5Rve9" 00:25:08.720 } 00:25:08.720 } 00:25:08.720 ] 00:25:08.720 }, 00:25:08.720 { 00:25:08.720 "subsystem": "iobuf", 00:25:08.720 "config": [ 00:25:08.720 { 00:25:08.720 "method": "iobuf_set_options", 00:25:08.720 "params": { 00:25:08.720 "small_pool_count": 8192, 00:25:08.720 "large_pool_count": 1024, 00:25:08.720 "small_bufsize": 8192, 00:25:08.720 "large_bufsize": 135168, 00:25:08.720 "enable_numa": false 00:25:08.720 } 00:25:08.720 } 00:25:08.720 ] 00:25:08.720 }, 00:25:08.720 { 00:25:08.720 "subsystem": 
"sock", 00:25:08.720 "config": [ 00:25:08.720 { 00:25:08.720 "method": "sock_set_default_impl", 00:25:08.720 "params": { 00:25:08.720 "impl_name": "uring" 00:25:08.720 } 00:25:08.720 }, 00:25:08.720 { 00:25:08.720 "method": "sock_impl_set_options", 00:25:08.720 "params": { 00:25:08.720 "impl_name": "ssl", 00:25:08.720 "recv_buf_size": 4096, 00:25:08.720 "send_buf_size": 4096, 00:25:08.720 "enable_recv_pipe": true, 00:25:08.720 "enable_quickack": false, 00:25:08.720 "enable_placement_id": 0, 00:25:08.720 "enable_zerocopy_send_server": true, 00:25:08.720 "enable_zerocopy_send_client": false, 00:25:08.720 "zerocopy_threshold": 0, 00:25:08.720 "tls_version": 0, 00:25:08.720 "enable_ktls": false 00:25:08.720 } 00:25:08.720 }, 00:25:08.720 { 00:25:08.720 "method": "sock_impl_set_options", 00:25:08.720 "params": { 00:25:08.720 "impl_name": "posix", 00:25:08.720 "recv_buf_size": 2097152, 00:25:08.720 "send_buf_size": 2097152, 00:25:08.720 "enable_recv_pipe": true, 00:25:08.720 "enable_quickack": false, 00:25:08.720 "enable_placement_id": 0, 00:25:08.720 "enable_zerocopy_send_server": true, 00:25:08.720 "enable_zerocopy_send_client": false, 00:25:08.720 "zerocopy_threshold": 0, 00:25:08.720 "tls_version": 0, 00:25:08.720 "enable_ktls": false 00:25:08.720 } 00:25:08.720 }, 00:25:08.720 { 00:25:08.720 "method": "sock_impl_set_options", 00:25:08.720 "params": { 00:25:08.720 "impl_name": "uring", 00:25:08.720 "recv_buf_size": 2097152, 00:25:08.720 "send_buf_size": 2097152, 00:25:08.720 "enable_recv_pipe": true, 00:25:08.720 "enable_quickack": false, 00:25:08.720 "enable_placement_id": 0, 00:25:08.720 "enable_zerocopy_send_server": false, 00:25:08.720 "enable_zerocopy_send_client": false, 00:25:08.720 "zerocopy_threshold": 0, 00:25:08.720 "tls_version": 0, 00:25:08.720 "enable_ktls": false 00:25:08.720 } 00:25:08.720 } 00:25:08.720 ] 00:25:08.720 }, 00:25:08.720 { 00:25:08.720 "subsystem": "vmd", 00:25:08.720 "config": [] 00:25:08.720 }, 00:25:08.720 { 00:25:08.720 "subsystem": "accel", 00:25:08.720 "config": [ 00:25:08.720 { 00:25:08.720 "method": "accel_set_options", 00:25:08.720 "params": { 00:25:08.720 "small_cache_size": 128, 00:25:08.720 "large_cache_size": 16, 00:25:08.720 "task_count": 2048, 00:25:08.720 "sequence_count": 2048, 00:25:08.720 "buf_count": 2048 00:25:08.720 } 00:25:08.720 } 00:25:08.720 ] 00:25:08.720 }, 00:25:08.720 { 00:25:08.720 "subsystem": "bdev", 00:25:08.720 "config": [ 00:25:08.720 { 00:25:08.720 "method": "bdev_set_options", 00:25:08.720 "params": { 00:25:08.720 "bdev_io_pool_size": 65535, 00:25:08.720 "bdev_io_cache_size": 256, 00:25:08.720 "bdev_auto_examine": true, 00:25:08.720 "iobuf_small_cache_size": 128, 00:25:08.720 "iobuf_large_cache_size": 16 00:25:08.720 } 00:25:08.720 }, 00:25:08.720 { 00:25:08.720 "method": "bdev_raid_set_options", 00:25:08.720 "params": { 00:25:08.720 "process_window_size_kb": 1024, 00:25:08.720 "process_max_bandwidth_mb_sec": 0 00:25:08.720 } 00:25:08.720 }, 00:25:08.720 { 00:25:08.720 "method": "bdev_iscsi_set_options", 00:25:08.720 "params": { 00:25:08.720 "timeout_sec": 30 00:25:08.720 } 00:25:08.720 }, 00:25:08.720 { 00:25:08.720 "method": "bdev_nvme_set_options", 00:25:08.720 "params": { 00:25:08.720 "action_on_timeout": "none", 00:25:08.720 "timeout_us": 0, 00:25:08.720 "timeout_admin_us": 0, 00:25:08.720 "keep_alive_timeout_ms": 10000, 00:25:08.721 "arbitration_burst": 0, 00:25:08.721 "low_priority_weight": 0, 00:25:08.721 "medium_priority_weight": 0, 00:25:08.721 "high_priority_weight": 0, 00:25:08.721 "nvme_adminq_poll_period_us": 
10000, 00:25:08.721 "nvme_ioq_poll_period_us": 0, 00:25:08.721 "io_queue_requests": 512, 00:25:08.721 "delay_cmd_submit": true, 00:25:08.721 "transport_retry_count": 4, 00:25:08.721 "bdev_retry_count": 3, 00:25:08.721 "transport_ack_timeout": 0, 00:25:08.721 "ctrlr_loss_timeout_sec": 0, 00:25:08.721 "reconnect_delay_sec": 0, 00:25:08.721 "fast_io_fail_timeout_sec": 0, 00:25:08.721 "disable_auto_failback": false, 00:25:08.721 "generate_uuids": false, 00:25:08.721 "transport_tos": 0, 00:25:08.721 "nvme_error_stat": false, 00:25:08.721 "rdma_srq_size": 0, 00:25:08.721 "io_path_stat": false, 00:25:08.721 "allow_accel_sequence": false, 00:25:08.721 "rdma_max_cq_size": 0, 00:25:08.721 "rdma_cm_event_timeout_ms": 0, 00:25:08.721 "dhchap_digests": [ 00:25:08.721 "sha256", 00:25:08.721 "sha384", 00:25:08.721 "sha512" 00:25:08.721 ], 00:25:08.721 "dhchap_dhgroups": [ 00:25:08.721 "null", 00:25:08.721 "ffdhe2048", 00:25:08.721 "ffdhe3072", 00:25:08.721 "ffdhe4096", 00:25:08.721 "ffdhe6144", 00:25:08.721 "ffdhe8192" 00:25:08.721 ], 00:25:08.721 "rdma_umr_per_io": false 00:25:08.721 } 00:25:08.721 }, 00:25:08.721 { 00:25:08.721 "method": "bdev_nvme_attach_controller", 00:25:08.721 "params": { 00:25:08.721 "name": "nvme0", 00:25:08.721 "trtype": "TCP", 00:25:08.721 "adrfam": "IPv4", 00:25:08.721 "traddr": "127.0.0.1", 00:25:08.721 "trsvcid": "4420", 00:25:08.721 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:08.721 "prchk_reftag": false, 00:25:08.721 "prchk_guard": false, 00:25:08.721 "ctrlr_loss_timeout_sec": 0, 00:25:08.721 "reconnect_delay_sec": 0, 00:25:08.721 "fast_io_fail_timeout_sec": 0, 00:25:08.721 "psk": "key0", 00:25:08.721 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:08.721 "hdgst": false, 00:25:08.721 "ddgst": false, 00:25:08.721 "multipath": "multipath" 00:25:08.721 } 00:25:08.721 }, 00:25:08.721 { 00:25:08.721 "method": "bdev_nvme_set_hotplug", 00:25:08.721 "params": { 00:25:08.721 "period_us": 100000, 00:25:08.721 "enable": false 00:25:08.721 } 00:25:08.721 }, 00:25:08.721 { 00:25:08.721 "method": "bdev_wait_for_examine" 00:25:08.721 } 00:25:08.721 ] 00:25:08.721 }, 00:25:08.721 { 00:25:08.721 "subsystem": "nbd", 00:25:08.721 "config": [] 00:25:08.721 } 00:25:08.721 ] 00:25:08.721 }' 00:25:08.721 14:41:00 keyring_file -- keyring/file.sh@115 -- # killprocess 101474 00:25:08.721 14:41:00 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 101474 ']' 00:25:08.721 14:41:00 keyring_file -- common/autotest_common.sh@958 -- # kill -0 101474 00:25:08.721 14:41:00 keyring_file -- common/autotest_common.sh@959 -- # uname 00:25:08.721 14:41:00 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:08.721 14:41:00 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101474 00:25:08.721 killing process with pid 101474 00:25:08.721 Received shutdown signal, test time was about 1.000000 seconds 00:25:08.721 00:25:08.721 Latency(us) 00:25:08.721 [2024-12-16T14:41:00.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.721 [2024-12-16T14:41:00.921Z] =================================================================================================================== 00:25:08.721 [2024-12-16T14:41:00.921Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:08.721 14:41:00 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:08.721 14:41:00 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:08.721 14:41:00 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 101474' 00:25:08.721 14:41:00 keyring_file -- common/autotest_common.sh@973 -- # kill 101474 00:25:08.721 14:41:00 keyring_file -- common/autotest_common.sh@978 -- # wait 101474 00:25:08.721 14:41:00 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:08.721 14:41:00 keyring_file -- keyring/file.sh@118 -- # bperfpid=101713 00:25:08.721 14:41:00 keyring_file -- keyring/file.sh@120 -- # waitforlisten 101713 /var/tmp/bperf.sock 00:25:08.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:08.721 14:41:00 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 101713 ']' 00:25:08.721 14:41:00 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:08.721 14:41:00 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.721 14:41:00 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:25:08.721 "subsystems": [ 00:25:08.721 { 00:25:08.721 "subsystem": "keyring", 00:25:08.721 "config": [ 00:25:08.721 { 00:25:08.721 "method": "keyring_file_add_key", 00:25:08.721 "params": { 00:25:08.721 "name": "key0", 00:25:08.721 "path": "/tmp/tmp.q0qRB8dmsa" 00:25:08.721 } 00:25:08.721 }, 00:25:08.721 { 00:25:08.721 "method": "keyring_file_add_key", 00:25:08.721 "params": { 00:25:08.721 "name": "key1", 00:25:08.721 "path": "/tmp/tmp.S43uJ5Rve9" 00:25:08.721 } 00:25:08.721 } 00:25:08.721 ] 00:25:08.721 }, 00:25:08.721 { 00:25:08.721 "subsystem": "iobuf", 00:25:08.721 "config": [ 00:25:08.721 { 00:25:08.721 "method": "iobuf_set_options", 00:25:08.721 "params": { 00:25:08.721 "small_pool_count": 8192, 00:25:08.721 "large_pool_count": 1024, 00:25:08.721 "small_bufsize": 8192, 00:25:08.721 "large_bufsize": 135168, 00:25:08.721 "enable_numa": false 00:25:08.721 } 00:25:08.721 } 00:25:08.721 ] 00:25:08.721 }, 00:25:08.721 { 00:25:08.721 "subsystem": "sock", 00:25:08.721 "config": [ 00:25:08.721 { 00:25:08.721 "method": "sock_set_default_impl", 00:25:08.721 "params": { 00:25:08.721 "impl_name": "uring" 00:25:08.721 } 00:25:08.721 }, 00:25:08.721 { 00:25:08.721 "method": "sock_impl_set_options", 00:25:08.721 "params": { 00:25:08.721 "impl_name": "ssl", 00:25:08.721 "recv_buf_size": 4096, 00:25:08.721 "send_buf_size": 4096, 00:25:08.721 "enable_recv_pipe": true, 00:25:08.721 "enable_quickack": false, 00:25:08.721 "enable_placement_id": 0, 00:25:08.721 "enable_zerocopy_send_server": true, 00:25:08.721 "enable_zerocopy_send_client": false, 00:25:08.721 "zerocopy_threshold": 0, 00:25:08.721 "tls_version": 0, 00:25:08.721 "enable_ktls": false 00:25:08.721 } 00:25:08.721 }, 00:25:08.721 { 00:25:08.721 "method": "sock_impl_set_options", 00:25:08.721 "params": { 00:25:08.721 "impl_name": "posix", 00:25:08.721 "recv_buf_size": 2097152, 00:25:08.721 "send_buf_size": 2097152, 00:25:08.721 "enable_recv_pipe": true, 00:25:08.721 "enable_quickack": false, 00:25:08.721 "enable_placement_id": 0, 00:25:08.721 "enable_zerocopy_send_server": true, 00:25:08.721 "enable_zerocopy_send_client": false, 00:25:08.721 "zerocopy_threshold": 0, 00:25:08.721 "tls_version": 0, 00:25:08.721 "enable_ktls": false 00:25:08.721 } 00:25:08.721 }, 00:25:08.721 { 00:25:08.721 "method": "sock_impl_set_options", 00:25:08.721 "params": { 00:25:08.721 "impl_name": "uring", 00:25:08.721 "recv_buf_size": 2097152, 00:25:08.721 "send_buf_size": 2097152, 00:25:08.721 "enable_recv_pipe": true, 00:25:08.721 
"enable_quickack": false, 00:25:08.721 "enable_placement_id": 0, 00:25:08.721 "enable_zerocopy_send_server": false, 00:25:08.721 "enable_zerocopy_send_client": false, 00:25:08.721 "zerocopy_threshold": 0, 00:25:08.721 "tls_version": 0, 00:25:08.721 "enable_ktls": false 00:25:08.721 } 00:25:08.721 } 00:25:08.721 ] 00:25:08.721 }, 00:25:08.721 { 00:25:08.721 "subsystem": "vmd", 00:25:08.721 "config": [] 00:25:08.721 }, 00:25:08.721 { 00:25:08.721 "subsystem": "accel", 00:25:08.721 "config": [ 00:25:08.721 { 00:25:08.721 "method": "accel_set_options", 00:25:08.721 "params": { 00:25:08.721 "small_cache_size": 128, 00:25:08.721 "large_cache_size": 16, 00:25:08.721 "task_count": 2048, 00:25:08.721 "sequence_count": 2048, 00:25:08.721 "buf_count": 2048 00:25:08.721 } 00:25:08.721 } 00:25:08.721 ] 00:25:08.721 }, 00:25:08.721 { 00:25:08.721 "subsystem": "bdev", 00:25:08.721 "config": [ 00:25:08.721 { 00:25:08.721 "method": "bdev_set_options", 00:25:08.721 "params": { 00:25:08.721 "bdev_io_pool_size": 65535, 00:25:08.721 "bdev_io_cache_size": 256, 00:25:08.721 "bdev_auto_examine": true, 00:25:08.721 "iobuf_small_cache_size": 128, 00:25:08.721 "iobuf_large_cache_size": 16 00:25:08.721 } 00:25:08.721 }, 00:25:08.721 { 00:25:08.721 "method": "bdev_raid_set_options", 00:25:08.721 "params": { 00:25:08.722 "process_window_size_kb": 1024, 00:25:08.722 "process_max_bandwidth_mb_sec": 0 00:25:08.722 } 00:25:08.722 }, 00:25:08.722 { 00:25:08.722 "method": "bdev_iscsi_set_options", 00:25:08.722 "params": { 00:25:08.722 "timeout_sec": 30 00:25:08.722 } 00:25:08.722 }, 00:25:08.722 { 00:25:08.722 "method": "bdev_nvme_set_options", 00:25:08.722 "params": { 00:25:08.722 "action_on_timeout": "none", 00:25:08.722 "timeout_us": 0, 00:25:08.722 "timeout_admin_us": 0, 00:25:08.722 "keep_alive_timeout_ms": 10000, 00:25:08.722 "arbitration_burst": 0, 00:25:08.722 "low_priority_weight": 0, 00:25:08.722 "medium_priority_weight": 0, 00:25:08.722 "high_priority_weight": 0, 00:25:08.722 "nvme_adminq_poll_period_us": 10000, 00:25:08.722 "nvme_ioq_poll_period_us": 0, 00:25:08.722 "io_queue_requests": 512, 00:25:08.722 "delay_cmd_submit": true, 00:25:08.722 "transport_retry_count": 4, 00:25:08.722 "bdev_retry_count": 3, 00:25:08.722 "transport_ack_timeout": 0, 00:25:08.722 "ctrlr_loss_timeout_sec": 0, 00:25:08.722 "reconnect_delay_sec": 0, 00:25:08.722 "fast_io_fail_timeout_sec": 0, 00:25:08.722 "disable_auto_failback": false, 00:25:08.722 "generate_uuids": false, 00:25:08.722 "transport_tos": 0, 00:25:08.722 "nvme_error_stat": false, 00:25:08.722 "rdma_srq_size": 0, 00:25:08.722 "io_path_stat": false, 00:25:08.722 "allow_accel_sequence": false, 00:25:08.722 "rdma_max_cq_size": 0, 00:25:08.722 "rdma_cm_event_timeout_ms": 0, 00:25:08.722 "dhchap_digests": [ 00:25:08.722 "sha256", 00:25:08.722 "sha384", 00:25:08.722 "sha512" 00:25:08.722 ], 00:25:08.722 "dhchap_dhgroups": [ 00:25:08.722 "null", 00:25:08.722 "ffdhe2048", 00:25:08.722 "ffdhe3072", 00:25:08.722 "ffdhe4096", 00:25:08.722 "ffdhe6144", 00:25:08.722 "ffdhe8192" 00:25:08.722 ], 00:25:08.722 "rdma_umr_per_io": false 00:25:08.722 } 00:25:08.722 }, 00:25:08.722 { 00:25:08.722 "method": "bdev_nvme_attach_controller", 00:25:08.722 "params": { 00:25:08.722 "name": "nvme0", 00:25:08.722 "trtype": "TCP", 00:25:08.722 "adrfam": "IPv4", 00:25:08.722 "traddr": "127.0.0.1", 00:25:08.722 "trsvcid": "4420", 00:25:08.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:08.722 "prchk_reftag": false, 00:25:08.722 "prchk_guard": false, 00:25:08.722 "ctrlr_loss_timeout_sec": 0, 00:25:08.722 
"reconnect_delay_sec": 0, 00:25:08.722 "fast_io_fail_timeout_sec": 0, 00:25:08.722 "psk": "key0", 00:25:08.722 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:08.722 "hdgst": false, 00:25:08.722 "ddgst": false, 00:25:08.722 "multipath": "multipath" 00:25:08.722 } 00:25:08.722 }, 00:25:08.722 { 00:25:08.722 "method": "bdev_nvme_set_hotplug", 00:25:08.722 "params": { 00:25:08.722 "period_us": 100000, 00:25:08.722 "enable": false 00:25:08.722 } 00:25:08.722 }, 00:25:08.722 { 00:25:08.722 "method": "bdev_wait_for_examine" 00:25:08.722 } 00:25:08.722 ] 00:25:08.722 }, 00:25:08.722 { 00:25:08.722 "subsystem": "nbd", 00:25:08.722 "config": [] 00:25:08.722 } 00:25:08.722 ] 00:25:08.722 }' 00:25:08.722 14:41:00 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:08.722 14:41:00 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.722 14:41:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:08.722 [2024-12-16 14:41:00.853084] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:25:08.722 [2024-12-16 14:41:00.853566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101713 ] 00:25:08.984 [2024-12-16 14:41:01.001041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.984 [2024-12-16 14:41:01.019523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.984 [2024-12-16 14:41:01.130093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:08.984 [2024-12-16 14:41:01.166546] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:09.922 14:41:01 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.922 14:41:01 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:25:09.922 14:41:01 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:25:09.922 14:41:01 keyring_file -- keyring/file.sh@121 -- # jq length 00:25:09.922 14:41:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:09.922 14:41:02 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:25:09.922 14:41:02 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:25:09.922 14:41:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:09.922 14:41:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:09.922 14:41:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:09.922 14:41:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:09.922 14:41:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:10.180 14:41:02 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:25:10.180 14:41:02 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:25:10.180 14:41:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:10.180 14:41:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:10.180 14:41:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:10.180 14:41:02 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:10.180 14:41:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:10.438 14:41:02 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:25:10.438 14:41:02 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:25:10.438 14:41:02 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:25:10.438 14:41:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:25:11.005 14:41:02 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:25:11.005 14:41:02 keyring_file -- keyring/file.sh@1 -- # cleanup 00:25:11.005 14:41:02 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.q0qRB8dmsa /tmp/tmp.S43uJ5Rve9 00:25:11.005 14:41:02 keyring_file -- keyring/file.sh@20 -- # killprocess 101713 00:25:11.005 14:41:02 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 101713 ']' 00:25:11.005 14:41:02 keyring_file -- common/autotest_common.sh@958 -- # kill -0 101713 00:25:11.005 14:41:02 keyring_file -- common/autotest_common.sh@959 -- # uname 00:25:11.005 14:41:02 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.005 14:41:02 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101713 00:25:11.005 killing process with pid 101713 00:25:11.005 Received shutdown signal, test time was about 1.000000 seconds 00:25:11.005 00:25:11.005 Latency(us) 00:25:11.005 [2024-12-16T14:41:03.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.005 [2024-12-16T14:41:03.205Z] =================================================================================================================== 00:25:11.005 [2024-12-16T14:41:03.205Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:11.005 14:41:02 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:11.005 14:41:02 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:11.005 14:41:02 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101713' 00:25:11.005 14:41:02 keyring_file -- common/autotest_common.sh@973 -- # kill 101713 00:25:11.005 14:41:02 keyring_file -- common/autotest_common.sh@978 -- # wait 101713 00:25:11.005 14:41:03 keyring_file -- keyring/file.sh@21 -- # killprocess 101463 00:25:11.005 14:41:03 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 101463 ']' 00:25:11.005 14:41:03 keyring_file -- common/autotest_common.sh@958 -- # kill -0 101463 00:25:11.005 14:41:03 keyring_file -- common/autotest_common.sh@959 -- # uname 00:25:11.005 14:41:03 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.005 14:41:03 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101463 00:25:11.005 killing process with pid 101463 00:25:11.005 14:41:03 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:11.005 14:41:03 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:11.005 14:41:03 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101463' 00:25:11.005 14:41:03 keyring_file -- common/autotest_common.sh@973 -- # kill 101463 00:25:11.005 14:41:03 keyring_file -- common/autotest_common.sh@978 -- # wait 101463 00:25:11.264 00:25:11.264 real 0m14.256s 00:25:11.264 user 0m36.950s 00:25:11.264 sys 0m2.644s 00:25:11.264 14:41:03 
keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.264 14:41:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:11.264 ************************************ 00:25:11.264 END TEST keyring_file 00:25:11.264 ************************************ 00:25:11.264 14:41:03 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:25:11.264 14:41:03 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:11.264 14:41:03 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:11.264 14:41:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:11.264 14:41:03 -- common/autotest_common.sh@10 -- # set +x 00:25:11.264 ************************************ 00:25:11.264 START TEST keyring_linux 00:25:11.264 ************************************ 00:25:11.264 14:41:03 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:11.264 Joined session keyring: 722624460 00:25:11.264 * Looking for test storage... 00:25:11.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:11.264 14:41:03 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:11.264 14:41:03 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:25:11.264 14:41:03 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:11.523 14:41:03 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@345 -- # : 1 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.523 14:41:03 keyring_linux -- scripts/common.sh@368 -- # return 0 00:25:11.523 14:41:03 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.523 14:41:03 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:11.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.523 --rc genhtml_branch_coverage=1 00:25:11.523 --rc genhtml_function_coverage=1 00:25:11.523 --rc genhtml_legend=1 00:25:11.523 --rc geninfo_all_blocks=1 00:25:11.523 --rc geninfo_unexecuted_blocks=1 00:25:11.523 00:25:11.523 ' 00:25:11.523 14:41:03 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:11.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.523 --rc genhtml_branch_coverage=1 00:25:11.523 --rc genhtml_function_coverage=1 00:25:11.523 --rc genhtml_legend=1 00:25:11.523 --rc geninfo_all_blocks=1 00:25:11.523 --rc geninfo_unexecuted_blocks=1 00:25:11.523 00:25:11.523 ' 00:25:11.523 14:41:03 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:11.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.523 --rc genhtml_branch_coverage=1 00:25:11.523 --rc genhtml_function_coverage=1 00:25:11.523 --rc genhtml_legend=1 00:25:11.523 --rc geninfo_all_blocks=1 00:25:11.523 --rc geninfo_unexecuted_blocks=1 00:25:11.523 00:25:11.523 ' 00:25:11.523 14:41:03 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:11.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.523 --rc genhtml_branch_coverage=1 00:25:11.523 --rc genhtml_function_coverage=1 00:25:11.523 --rc genhtml_legend=1 00:25:11.523 --rc geninfo_all_blocks=1 00:25:11.523 --rc geninfo_unexecuted_blocks=1 00:25:11.523 00:25:11.523 ' 00:25:11.523 14:41:03 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:11.523 14:41:03 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:11.523 14:41:03 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:25:11.523 14:41:03 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.523 14:41:03 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.523 14:41:03 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.523 14:41:03 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.524 14:41:03 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:63735ac0-cf43-4c13-880c-ea4676416181 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=63735ac0-cf43-4c13-880c-ea4676416181 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:11.524 14:41:03 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.524 14:41:03 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.524 14:41:03 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.524 14:41:03 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.524 14:41:03 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.524 14:41:03 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.524 14:41:03 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.524 14:41:03 keyring_linux -- paths/export.sh@5 -- # export PATH 00:25:11.524 14:41:03 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.524 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.524 14:41:03 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:11.524 14:41:03 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:11.524 14:41:03 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:11.524 14:41:03 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:25:11.524 14:41:03 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:25:11.524 14:41:03 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:25:11.524 14:41:03 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:25:11.524 14:41:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:11.524 14:41:03 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:25:11.524 14:41:03 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:11.524 14:41:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:11.524 14:41:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:25:11.524 14:41:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@733 -- # python - 00:25:11.524 14:41:03 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:25:11.524 14:41:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:25:11.524 /tmp/:spdk-test:key0 00:25:11.524 14:41:03 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:25:11.524 14:41:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:11.524 14:41:03 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:25:11.524 14:41:03 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:11.524 14:41:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:11.524 14:41:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:25:11.524 14:41:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:25:11.524 14:41:03 keyring_linux -- nvmf/common.sh@733 -- # python - 00:25:11.524 14:41:03 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:25:11.524 14:41:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:25:11.524 /tmp/:spdk-test:key1 00:25:11.524 14:41:03 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=101836 00:25:11.524 14:41:03 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:11.524 14:41:03 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 101836 00:25:11.524 14:41:03 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 101836 ']' 00:25:11.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.524 14:41:03 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.524 14:41:03 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:11.524 14:41:03 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.524 14:41:03 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:11.524 14:41:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:11.524 [2024-12-16 14:41:03.693792] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:25:11.524 [2024-12-16 14:41:03.694029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101836 ] 00:25:11.783 [2024-12-16 14:41:03.832101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.783 [2024-12-16 14:41:03.852280] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.783 [2024-12-16 14:41:03.888338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:12.042 14:41:03 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:12.042 14:41:03 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:25:12.042 14:41:03 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:25:12.042 14:41:03 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.042 14:41:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:12.042 [2024-12-16 14:41:03.999688] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.042 null0 00:25:12.042 [2024-12-16 14:41:04.031672] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:12.042 [2024-12-16 14:41:04.031828] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:12.042 14:41:04 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.042 14:41:04 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:25:12.042 463220127 00:25:12.042 14:41:04 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:25:12.042 691309663 00:25:12.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:12.042 14:41:04 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=101841 00:25:12.042 14:41:04 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:25:12.042 14:41:04 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 101841 /var/tmp/bperf.sock 00:25:12.042 14:41:04 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 101841 ']' 00:25:12.042 14:41:04 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:12.042 14:41:04 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:12.042 14:41:04 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:12.042 14:41:04 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:12.042 14:41:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:12.042 [2024-12-16 14:41:04.118714] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:25:12.042 [2024-12-16 14:41:04.119011] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101841 ] 00:25:12.300 [2024-12-16 14:41:04.265165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.300 [2024-12-16 14:41:04.283925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.300 14:41:04 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:12.300 14:41:04 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:25:12.300 14:41:04 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:25:12.300 14:41:04 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:25:12.558 14:41:04 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:25:12.558 14:41:04 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:12.817 [2024-12-16 14:41:04.861740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:12.817 14:41:04 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:12.817 14:41:04 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:13.075 [2024-12-16 14:41:05.097029] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:13.075 nvme0n1 00:25:13.075 14:41:05 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:25:13.075 14:41:05 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:25:13.075 14:41:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:13.075 14:41:05 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:13.075 14:41:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:13.075 14:41:05 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:13.334 14:41:05 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:25:13.334 14:41:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:13.334 14:41:05 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:25:13.334 14:41:05 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:25:13.334 14:41:05 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:13.334 14:41:05 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:13.334 14:41:05 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:25:13.592 14:41:05 keyring_linux -- keyring/linux.sh@25 -- # sn=463220127 00:25:13.592 14:41:05 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:25:13.592 14:41:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:25:13.592 14:41:05 keyring_linux -- keyring/linux.sh@26 -- # [[ 463220127 == \4\6\3\2\2\0\1\2\7 ]] 00:25:13.592 14:41:05 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 463220127 00:25:13.592 14:41:05 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:25:13.592 14:41:05 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:13.850 Running I/O for 1 seconds... 00:25:14.785 12466.00 IOPS, 48.70 MiB/s 00:25:14.785 Latency(us) 00:25:14.785 [2024-12-16T14:41:06.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.785 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:14.785 nvme0n1 : 1.01 12463.86 48.69 0.00 0.00 10212.01 7030.23 15966.95 00:25:14.785 [2024-12-16T14:41:06.985Z] =================================================================================================================== 00:25:14.785 [2024-12-16T14:41:06.985Z] Total : 12463.86 48.69 0.00 0.00 10212.01 7030.23 15966.95 00:25:14.785 { 00:25:14.785 "results": [ 00:25:14.785 { 00:25:14.785 "job": "nvme0n1", 00:25:14.785 "core_mask": "0x2", 00:25:14.785 "workload": "randread", 00:25:14.785 "status": "finished", 00:25:14.785 "queue_depth": 128, 00:25:14.785 "io_size": 4096, 00:25:14.785 "runtime": 1.010441, 00:25:14.785 "iops": 12463.86478775109, 00:25:14.785 "mibps": 48.686971827152696, 00:25:14.785 "io_failed": 0, 00:25:14.785 "io_timeout": 0, 00:25:14.785 "avg_latency_us": 10212.006202087572, 00:25:14.785 "min_latency_us": 7030.225454545454, 00:25:14.785 "max_latency_us": 15966.952727272726 00:25:14.785 } 00:25:14.785 ], 00:25:14.785 "core_count": 1 00:25:14.785 } 00:25:14.785 14:41:06 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:14.785 14:41:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:15.043 14:41:07 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:25:15.043 14:41:07 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:25:15.043 14:41:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:15.043 14:41:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:15.043 14:41:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:15.043 14:41:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:15.303 14:41:07 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:25:15.303 14:41:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:15.303 14:41:07 keyring_linux -- keyring/linux.sh@23 -- # return 00:25:15.303 14:41:07 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:15.303 14:41:07 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:25:15.303 14:41:07 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
00:25:15.304 14:41:07 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:15.304 14:41:07 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:15.304 14:41:07 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:15.304 14:41:07 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:15.304 14:41:07 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:15.304 14:41:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:15.562 [2024-12-16 14:41:07.605052] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:15.562 [2024-12-16 14:41:07.605068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107f8d0 (107): Transport endpoint is not connected 00:25:15.562 [2024-12-16 14:41:07.606058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107f8d0 (9): Bad file descriptor 00:25:15.562 [2024-12-16 14:41:07.607055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:25:15.562 [2024-12-16 14:41:07.607082] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:15.562 [2024-12-16 14:41:07.607093] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:15.562 [2024-12-16 14:41:07.607103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:25:15.562 request: 00:25:15.562 { 00:25:15.562 "name": "nvme0", 00:25:15.562 "trtype": "tcp", 00:25:15.562 "traddr": "127.0.0.1", 00:25:15.562 "adrfam": "ipv4", 00:25:15.562 "trsvcid": "4420", 00:25:15.562 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:15.562 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:15.562 "prchk_reftag": false, 00:25:15.562 "prchk_guard": false, 00:25:15.562 "hdgst": false, 00:25:15.562 "ddgst": false, 00:25:15.562 "psk": ":spdk-test:key1", 00:25:15.562 "allow_unrecognized_csi": false, 00:25:15.562 "method": "bdev_nvme_attach_controller", 00:25:15.562 "req_id": 1 00:25:15.562 } 00:25:15.562 Got JSON-RPC error response 00:25:15.562 response: 00:25:15.562 { 00:25:15.562 "code": -5, 00:25:15.562 "message": "Input/output error" 00:25:15.562 } 00:25:15.562 14:41:07 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:25:15.562 14:41:07 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:15.562 14:41:07 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:15.562 14:41:07 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:15.562 14:41:07 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:15.562 14:41:07 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:15.562 14:41:07 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:15.562 14:41:07 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:15.562 14:41:07 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:15.562 14:41:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:15.562 14:41:07 keyring_linux -- keyring/linux.sh@33 -- # sn=463220127 00:25:15.562 14:41:07 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 463220127 00:25:15.562 1 links removed 00:25:15.562 14:41:07 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:15.562 14:41:07 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:15.562 14:41:07 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:15.562 14:41:07 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:15.562 14:41:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:25:15.562 14:41:07 keyring_linux -- keyring/linux.sh@33 -- # sn=691309663 00:25:15.562 14:41:07 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 691309663 00:25:15.562 1 links removed 00:25:15.562 14:41:07 keyring_linux -- keyring/linux.sh@41 -- # killprocess 101841 00:25:15.562 14:41:07 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 101841 ']' 00:25:15.563 14:41:07 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 101841 00:25:15.563 14:41:07 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:25:15.563 14:41:07 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:15.563 14:41:07 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101841 00:25:15.563 killing process with pid 101841 00:25:15.563 Received shutdown signal, test time was about 1.000000 seconds 00:25:15.563 00:25:15.563 Latency(us) 00:25:15.563 [2024-12-16T14:41:07.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.563 [2024-12-16T14:41:07.763Z] =================================================================================================================== 00:25:15.563 [2024-12-16T14:41:07.763Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:15.563 14:41:07 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:15.563 14:41:07 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:15.563 14:41:07 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101841' 00:25:15.563 14:41:07 keyring_linux -- common/autotest_common.sh@973 -- # kill 101841 00:25:15.563 14:41:07 keyring_linux -- common/autotest_common.sh@978 -- # wait 101841 00:25:15.821 14:41:07 keyring_linux -- keyring/linux.sh@42 -- # killprocess 101836 00:25:15.821 14:41:07 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 101836 ']' 00:25:15.821 14:41:07 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 101836 00:25:15.821 14:41:07 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:25:15.821 14:41:07 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:15.821 14:41:07 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101836 00:25:15.821 killing process with pid 101836 00:25:15.821 14:41:07 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:15.821 14:41:07 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:15.821 14:41:07 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101836' 00:25:15.821 14:41:07 keyring_linux -- common/autotest_common.sh@973 -- # kill 101836 00:25:15.821 14:41:07 keyring_linux -- common/autotest_common.sh@978 -- # wait 101836 00:25:16.080 ************************************ 00:25:16.080 END TEST keyring_linux 00:25:16.080 ************************************ 00:25:16.080 00:25:16.080 real 0m4.680s 00:25:16.080 user 0m9.586s 00:25:16.080 sys 0m1.290s 00:25:16.080 14:41:08 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:16.080 14:41:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:16.080 14:41:08 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:16.080 14:41:08 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:16.080 14:41:08 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:25:16.080 14:41:08 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:25:16.080 14:41:08 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:16.080 14:41:08 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:16.080 14:41:08 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:16.080 14:41:08 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:16.080 14:41:08 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:25:16.080 14:41:08 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:16.080 14:41:08 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:25:16.080 14:41:08 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:16.080 14:41:08 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:16.080 14:41:08 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:25:16.080 14:41:08 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:25:16.080 14:41:08 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:25:16.080 14:41:08 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:25:16.080 14:41:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:16.080 14:41:08 -- common/autotest_common.sh@10 -- # set +x 00:25:16.080 14:41:08 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:25:16.080 14:41:08 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:25:16.080 14:41:08 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:25:16.080 14:41:08 -- common/autotest_common.sh@10 -- # set +x 00:25:17.984 INFO: APP EXITING 00:25:17.984 INFO: 
killing all VMs 00:25:17.984 INFO: killing vhost app 00:25:17.984 INFO: EXIT DONE 00:25:18.552 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:18.552 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:18.552 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:19.489 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:19.489 Cleaning 00:25:19.489 Removing: /var/run/dpdk/spdk0/config 00:25:19.489 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:19.489 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:19.489 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:19.489 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:19.489 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:19.489 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:19.489 Removing: /var/run/dpdk/spdk1/config 00:25:19.489 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:19.489 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:19.489 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:19.489 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:19.489 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:19.489 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:19.489 Removing: /var/run/dpdk/spdk2/config 00:25:19.489 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:19.489 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:19.489 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:19.489 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:19.489 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:19.489 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:19.489 Removing: /var/run/dpdk/spdk3/config 00:25:19.489 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:19.489 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:19.489 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:19.489 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:19.489 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:19.489 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:19.489 Removing: /var/run/dpdk/spdk4/config 00:25:19.490 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:19.490 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:19.490 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:19.490 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:19.490 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:19.490 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:19.490 Removing: /dev/shm/nvmf_trace.0 00:25:19.490 Removing: /dev/shm/spdk_tgt_trace.pid70761 00:25:19.490 Removing: /var/run/dpdk/spdk0 00:25:19.490 Removing: /var/run/dpdk/spdk1 00:25:19.490 Removing: /var/run/dpdk/spdk2 00:25:19.490 Removing: /var/run/dpdk/spdk3 00:25:19.490 Removing: /var/run/dpdk/spdk4 00:25:19.490 Removing: /var/run/dpdk/spdk_pid100609 00:25:19.490 Removing: /var/run/dpdk/spdk_pid100644 00:25:19.490 Removing: /var/run/dpdk/spdk_pid100685 00:25:19.490 Removing: /var/run/dpdk/spdk_pid100929 00:25:19.490 Removing: /var/run/dpdk/spdk_pid100964 00:25:19.490 Removing: /var/run/dpdk/spdk_pid100994 00:25:19.490 Removing: /var/run/dpdk/spdk_pid101463 00:25:19.490 Removing: /var/run/dpdk/spdk_pid101474 00:25:19.490 Removing: /var/run/dpdk/spdk_pid101713 00:25:19.490 Removing: /var/run/dpdk/spdk_pid101836 00:25:19.490 Removing: 
/var/run/dpdk/spdk_pid101841 00:25:19.490 Removing: /var/run/dpdk/spdk_pid70613 00:25:19.490 Removing: /var/run/dpdk/spdk_pid70761 00:25:19.490 Removing: /var/run/dpdk/spdk_pid70954 00:25:19.490 Removing: /var/run/dpdk/spdk_pid71035 00:25:19.490 Removing: /var/run/dpdk/spdk_pid71055 00:25:19.490 Removing: /var/run/dpdk/spdk_pid71159 00:25:19.490 Removing: /var/run/dpdk/spdk_pid71169 00:25:19.490 Removing: /var/run/dpdk/spdk_pid71303 00:25:19.490 Removing: /var/run/dpdk/spdk_pid71503 00:25:19.490 Removing: /var/run/dpdk/spdk_pid71653 00:25:19.490 Removing: /var/run/dpdk/spdk_pid71731 00:25:19.490 Removing: /var/run/dpdk/spdk_pid71802 00:25:19.490 Removing: /var/run/dpdk/spdk_pid71888 00:25:19.490 Removing: /var/run/dpdk/spdk_pid71960 00:25:19.490 Removing: /var/run/dpdk/spdk_pid71993 00:25:19.490 Removing: /var/run/dpdk/spdk_pid72028 00:25:19.490 Removing: /var/run/dpdk/spdk_pid72098 00:25:19.490 Removing: /var/run/dpdk/spdk_pid72179 00:25:19.490 Removing: /var/run/dpdk/spdk_pid72623 00:25:19.490 Removing: /var/run/dpdk/spdk_pid72664 00:25:19.490 Removing: /var/run/dpdk/spdk_pid72715 00:25:19.490 Removing: /var/run/dpdk/spdk_pid72718 00:25:19.490 Removing: /var/run/dpdk/spdk_pid72786 00:25:19.490 Removing: /var/run/dpdk/spdk_pid72789 00:25:19.490 Removing: /var/run/dpdk/spdk_pid72843 00:25:19.490 Removing: /var/run/dpdk/spdk_pid72852 00:25:19.490 Removing: /var/run/dpdk/spdk_pid72897 00:25:19.490 Removing: /var/run/dpdk/spdk_pid72903 00:25:19.749 Removing: /var/run/dpdk/spdk_pid72948 00:25:19.749 Removing: /var/run/dpdk/spdk_pid72966 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73098 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73128 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73211 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73537 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73549 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73580 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73594 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73609 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73628 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73642 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73657 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73671 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73684 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73700 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73719 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73732 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73748 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73761 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73775 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73790 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73804 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73818 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73833 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73863 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73877 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73906 00:25:19.749 Removing: /var/run/dpdk/spdk_pid73973 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74001 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74011 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74038 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74049 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74051 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74093 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74107 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74130 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74140 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74149 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74153 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74168 
00:25:19.749 Removing: /var/run/dpdk/spdk_pid74172 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74176 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74191 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74214 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74240 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74250 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74273 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74287 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74292 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74327 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74344 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74365 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74378 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74380 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74382 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74395 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74397 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74399 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74412 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74483 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74525 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74632 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74671 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74715 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74725 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74747 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74756 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74793 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74803 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74881 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74897 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74939 00:25:19.749 Removing: /var/run/dpdk/spdk_pid74999 00:25:19.749 Removing: /var/run/dpdk/spdk_pid75049 00:25:19.749 Removing: /var/run/dpdk/spdk_pid75073 00:25:19.749 Removing: /var/run/dpdk/spdk_pid75171 00:25:19.749 Removing: /var/run/dpdk/spdk_pid75215 00:25:19.749 Removing: /var/run/dpdk/spdk_pid75247 00:25:19.749 Removing: /var/run/dpdk/spdk_pid75474 00:25:19.749 Removing: /var/run/dpdk/spdk_pid75566 00:25:19.749 Removing: /var/run/dpdk/spdk_pid75589 00:25:19.749 Removing: /var/run/dpdk/spdk_pid75618 00:25:19.749 Removing: /var/run/dpdk/spdk_pid75652 00:25:19.750 Removing: /var/run/dpdk/spdk_pid75685 00:25:20.016 Removing: /var/run/dpdk/spdk_pid75719 00:25:20.016 Removing: /var/run/dpdk/spdk_pid75750 00:25:20.016 Removing: /var/run/dpdk/spdk_pid76149 00:25:20.016 Removing: /var/run/dpdk/spdk_pid76189 00:25:20.016 Removing: /var/run/dpdk/spdk_pid76518 00:25:20.016 Removing: /var/run/dpdk/spdk_pid76972 00:25:20.016 Removing: /var/run/dpdk/spdk_pid77237 00:25:20.016 Removing: /var/run/dpdk/spdk_pid78062 00:25:20.016 Removing: /var/run/dpdk/spdk_pid78968 00:25:20.016 Removing: /var/run/dpdk/spdk_pid79091 00:25:20.016 Removing: /var/run/dpdk/spdk_pid79153 00:25:20.016 Removing: /var/run/dpdk/spdk_pid80549 00:25:20.016 Removing: /var/run/dpdk/spdk_pid80856 00:25:20.016 Removing: /var/run/dpdk/spdk_pid84568 00:25:20.016 Removing: /var/run/dpdk/spdk_pid84928 00:25:20.016 Removing: /var/run/dpdk/spdk_pid85037 00:25:20.016 Removing: /var/run/dpdk/spdk_pid85164 00:25:20.016 Removing: /var/run/dpdk/spdk_pid85185 00:25:20.016 Removing: /var/run/dpdk/spdk_pid85209 00:25:20.016 Removing: /var/run/dpdk/spdk_pid85230 00:25:20.016 Removing: /var/run/dpdk/spdk_pid85323 00:25:20.016 Removing: /var/run/dpdk/spdk_pid85452 00:25:20.016 Removing: /var/run/dpdk/spdk_pid85590 00:25:20.016 Removing: /var/run/dpdk/spdk_pid85667 00:25:20.016 Removing: /var/run/dpdk/spdk_pid85854 00:25:20.016 Removing: 
/var/run/dpdk/spdk_pid85917 00:25:20.016 Removing: /var/run/dpdk/spdk_pid85996 00:25:20.016 Removing: /var/run/dpdk/spdk_pid86356 00:25:20.016 Removing: /var/run/dpdk/spdk_pid86770 00:25:20.016 Removing: /var/run/dpdk/spdk_pid86771 00:25:20.016 Removing: /var/run/dpdk/spdk_pid86772 00:25:20.016 Removing: /var/run/dpdk/spdk_pid87028 00:25:20.016 Removing: /var/run/dpdk/spdk_pid87271 00:25:20.016 Removing: /var/run/dpdk/spdk_pid87273 00:25:20.016 Removing: /var/run/dpdk/spdk_pid89565 00:25:20.016 Removing: /var/run/dpdk/spdk_pid89950 00:25:20.016 Removing: /var/run/dpdk/spdk_pid89952 00:25:20.016 Removing: /var/run/dpdk/spdk_pid90287 00:25:20.016 Removing: /var/run/dpdk/spdk_pid90301 00:25:20.016 Removing: /var/run/dpdk/spdk_pid90321 00:25:20.016 Removing: /var/run/dpdk/spdk_pid90349 00:25:20.016 Removing: /var/run/dpdk/spdk_pid90354 00:25:20.016 Removing: /var/run/dpdk/spdk_pid90443 00:25:20.016 Removing: /var/run/dpdk/spdk_pid90445 00:25:20.016 Removing: /var/run/dpdk/spdk_pid90553 00:25:20.016 Removing: /var/run/dpdk/spdk_pid90555 00:25:20.016 Removing: /var/run/dpdk/spdk_pid90663 00:25:20.016 Removing: /var/run/dpdk/spdk_pid90671 00:25:20.016 Removing: /var/run/dpdk/spdk_pid91106 00:25:20.016 Removing: /var/run/dpdk/spdk_pid91149 00:25:20.016 Removing: /var/run/dpdk/spdk_pid91258 00:25:20.016 Removing: /var/run/dpdk/spdk_pid91337 00:25:20.016 Removing: /var/run/dpdk/spdk_pid91685 00:25:20.016 Removing: /var/run/dpdk/spdk_pid91874 00:25:20.016 Removing: /var/run/dpdk/spdk_pid92291 00:25:20.016 Removing: /var/run/dpdk/spdk_pid92829 00:25:20.016 Removing: /var/run/dpdk/spdk_pid93684 00:25:20.016 Removing: /var/run/dpdk/spdk_pid94320 00:25:20.016 Removing: /var/run/dpdk/spdk_pid94322 00:25:20.016 Removing: /var/run/dpdk/spdk_pid96315 00:25:20.016 Removing: /var/run/dpdk/spdk_pid96362 00:25:20.016 Removing: /var/run/dpdk/spdk_pid96421 00:25:20.016 Removing: /var/run/dpdk/spdk_pid96471 00:25:20.016 Removing: /var/run/dpdk/spdk_pid96571 00:25:20.016 Removing: /var/run/dpdk/spdk_pid96618 00:25:20.016 Removing: /var/run/dpdk/spdk_pid96671 00:25:20.016 Removing: /var/run/dpdk/spdk_pid96717 00:25:20.016 Removing: /var/run/dpdk/spdk_pid97064 00:25:20.016 Removing: /var/run/dpdk/spdk_pid98261 00:25:20.016 Removing: /var/run/dpdk/spdk_pid98394 00:25:20.016 Removing: /var/run/dpdk/spdk_pid98628 00:25:20.017 Removing: /var/run/dpdk/spdk_pid99221 00:25:20.017 Removing: /var/run/dpdk/spdk_pid99375 00:25:20.017 Removing: /var/run/dpdk/spdk_pid99534 00:25:20.017 Removing: /var/run/dpdk/spdk_pid99631 00:25:20.017 Removing: /var/run/dpdk/spdk_pid99796 00:25:20.017 Removing: /var/run/dpdk/spdk_pid99899 00:25:20.017 Clean 00:25:20.334 14:41:12 -- common/autotest_common.sh@1453 -- # return 0 00:25:20.334 14:41:12 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:25:20.334 14:41:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:20.334 14:41:12 -- common/autotest_common.sh@10 -- # set +x 00:25:20.334 14:41:12 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:25:20.334 14:41:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:20.334 14:41:12 -- common/autotest_common.sh@10 -- # set +x 00:25:20.334 14:41:12 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:20.334 14:41:12 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:20.334 14:41:12 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:20.334 14:41:12 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:25:20.334 
14:41:12 -- spdk/autotest.sh@398 -- # hostname 00:25:20.334 14:41:12 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:20.619 geninfo: WARNING: invalid characters removed from testname! 00:25:42.553 14:41:33 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:45.091 14:41:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:47.627 14:41:39 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:50.163 14:41:41 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:52.067 14:41:44 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:54.601 14:41:46 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:57.135 14:41:49 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:57.135 14:41:49 -- spdk/autorun.sh@1 -- $ timing_finish 00:25:57.135 14:41:49 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:25:57.135 14:41:49 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:57.135 14:41:49 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:25:57.135 14:41:49 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build 
Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:57.135 + [[ -n 5995 ]] 00:25:57.135 + sudo kill 5995 00:25:57.145 [Pipeline] } 00:25:57.161 [Pipeline] // timeout 00:25:57.166 [Pipeline] } 00:25:57.181 [Pipeline] // stage 00:25:57.186 [Pipeline] } 00:25:57.200 [Pipeline] // catchError 00:25:57.210 [Pipeline] stage 00:25:57.212 [Pipeline] { (Stop VM) 00:25:57.225 [Pipeline] sh 00:25:57.505 + vagrant halt 00:26:00.041 ==> default: Halting domain... 00:26:06.618 [Pipeline] sh 00:26:06.898 + vagrant destroy -f 00:26:09.488 ==> default: Removing domain... 00:26:09.759 [Pipeline] sh 00:26:10.038 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:26:10.047 [Pipeline] } 00:26:10.061 [Pipeline] // stage 00:26:10.066 [Pipeline] } 00:26:10.082 [Pipeline] // dir 00:26:10.087 [Pipeline] } 00:26:10.103 [Pipeline] // wrap 00:26:10.110 [Pipeline] } 00:26:10.123 [Pipeline] // catchError 00:26:10.132 [Pipeline] stage 00:26:10.135 [Pipeline] { (Epilogue) 00:26:10.149 [Pipeline] sh 00:26:10.430 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:15.715 [Pipeline] catchError 00:26:15.717 [Pipeline] { 00:26:15.730 [Pipeline] sh 00:26:16.013 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:16.272 Artifacts sizes are good 00:26:16.281 [Pipeline] } 00:26:16.295 [Pipeline] // catchError 00:26:16.306 [Pipeline] archiveArtifacts 00:26:16.313 Archiving artifacts 00:26:16.433 [Pipeline] cleanWs 00:26:16.444 [WS-CLEANUP] Deleting project workspace... 00:26:16.445 [WS-CLEANUP] Deferred wipeout is used... 00:26:16.451 [WS-CLEANUP] done 00:26:16.453 [Pipeline] } 00:26:16.467 [Pipeline] // stage 00:26:16.472 [Pipeline] } 00:26:16.486 [Pipeline] // node 00:26:16.491 [Pipeline] End of Pipeline 00:26:16.540 Finished: SUCCESS
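
A note on the keyring_linux cleanup and the JSON-RPC error recorded earlier in this log: keyring/linux.sh keeps its NVMe/TCP PSKs as `user` keys in the kernel session keyring and passes only the key name (here ":spdk-test:key1") in the `psk` field of the bdev_nvme_attach_controller request; the trace right after the -5 (Input/output error) response stores and checks the non-zero exit status (es=1), and cleanup then resolves each key name back to a serial number with `keyctl search` and unlinks it. The sketch below is a hedged reconstruction of that add/search/unlink cycle using plain keyutils commands; the key payload and the `$name` value are illustrative placeholders, not values taken from this run.

```sh
#!/usr/bin/env bash
# Hedged reconstruction of the key handling used by keyring/linux.sh above.
# The payload is a placeholder; only the ":spdk-test:keyN" naming and the
# search/unlink cleanup mirror what the log shows.

name=":spdk-test:key0"

# Store a 'user' key in the session keyring (@s); keyctl prints the serial.
sn=$(keyctl add user "$name" "placeholder-psk-material" @s)
echo "stored $name as serial number $sn"

# bdev_nvme_attach_controller then carries only the key *name*, e.g.
#   "psk": ":spdk-test:key0"
# so the secret never appears in the JSON-RPC request itself.

# Cleanup, as in unlink_key(): resolve the serial by name, then unlink it.
sn=$(keyctl search @s user "$name")
keyctl unlink "$sn"    # keyutils reports e.g. "1 links removed"
```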
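
The xtrace lines around `killprocess 101841` and `killprocess 101836` expose the shape of that helper: require a PID, confirm the process is still alive with `kill -0`, look up its command name with `ps` on Linux, refuse to signal `sudo`, then print the "killing process with pid ..." message seen in the log and kill the process. The function below is a simplified stand-in that follows only those traced checks; it is not the actual autotest_common.sh implementation.

```sh
# Minimal stand-in for the killprocess() helper traced above (pids 101841 and
# 101836). It reproduces only the checks visible in the xtrace output.
killprocess() {
    local pid=$1 process_name=
    [ -z "$pid" ] && return 1          # '[' -z ... ']'
    kill -0 "$pid" || return 1         # kill -0 <pid>: does the process exist?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" = sudo ] && return 1   # never signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true    # reap it if it is a child of this shell
}
```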
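
The lcov invocations near the end follow a standard capture/merge/filter sequence: capture the counters the tests produced in the build tree, add them to the pre-test baseline, then repeatedly remove paths (DPDK, system headers, example and tool sources) from the combined tracefile. Condensed into a standalone script it looks roughly like the following; `$REPO`, `$OUT` and the filter list are stand-ins for the paths and patterns the job actually uses.

```sh
# Hedged condensation of the coverage post-processing run by autotest.sh above.
REPO=/home/vagrant/spdk_repo/spdk
OUT=$REPO/../output
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"

# 1. Capture test-time counters from the build tree (external files excluded).
$LCOV -c --no-external -d "$REPO" -t "$(hostname)" -o "$OUT/cov_test.info"

# 2. Merge the pre-test baseline with the test capture.
$LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# 3. Strip paths we do not want in the report, one remove pass per pattern.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    $LCOV -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done
```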
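
For completeness, the "Stop VM" and "Epilogue" stages that close the pipeline reduce to a handful of shell steps plus two Jenkins-only steps (archiveArtifacts and cleanWs). The recap below lists the commands as the `sh` steps above invoke them; the working directory is whatever the job's dir() block selected, which the log does not show.

```sh
# Recap of the shell steps from the final two pipeline stages above.
vagrant halt                # Stop VM: "==> default: Halting domain..."
vagrant destroy -f          # "==> default: Removing domain..."
mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output

# Epilogue: compress artifacts, then verify their size before archiving.
jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh   # "Artifacts sizes are good"
# archiveArtifacts and cleanWs then run as Jenkins pipeline steps, not shell.
```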